Compare commits

...

52 commits

Author SHA1 Message Date
e6b39c274c
refactor: lots and lots of writergate changes 2025-09-24 22:30:18 +02:00
96e8100373
Merge branch 'master' into zig-0.15 2025-09-24 18:33:33 +02:00
bcef17a466
fix: make sure we don't destroy file_path before navigating in open_file mini mode 2025-09-24 13:59:56 +02:00
622d65497a
feat: add helix mode keybindings for keypad keys 2025-09-23 22:58:52 +02:00
82c11c64f3
feat: add keybindings for keypad navigation keys 2025-09-23 22:52:21 +02:00
14dbc08bcf
feat: add string mappings for keypad key events 2025-09-23 22:31:12 +02:00
Jonathan Marler
5cc6724a07 win32 gui: center double-wide characters 2025-09-23 22:14:29 +02:00
Jonathan Marler
921f094509 workaround crash when rendering some utf8 on win32 gui
closes #194

Ignores cells that have graphemes with more than one codepoint rather
than crashing.
2025-09-23 22:14:29 +02:00
Jonathan Marler
2790dcfd11 add some new text to the font test 2025-09-23 22:14:29 +02:00
Jonathan Marler
05b87b1406 finish win32 gui support for double-wide characters 2025-09-23 22:14:29 +02:00
8278a080af fix: actually use staging_size in WindowState.generateGlyph 2025-09-23 22:14:29 +02:00
a9d4fed205 feat: support wide characters in win32 gui
closes #132
2025-09-23 22:14:29 +02:00
f7496654ae
feat: add vim mode aliases for buffer commands
This adds the following vim-mode-specific commands:

:bd (Close file)
:bw (Delete buffer)
:bnext (Next buffer/tab)
:bprevious (Previous buffer/tab)
:ls (List/switch buffers)

closes #296
2025-09-23 15:52:18 +02:00
be758be087
feat: make delete_buffer command with no argument delete the current buffer 2025-09-23 15:51:27 +02:00
024eb8b43b
build: improve nightly build release notes 2025-09-23 15:20:13 +02:00
34594942c7
build: add source tarballs to nightly builds 2025-09-23 15:19:45 +02:00
15e27a6104
build: add option to allow uploading dirty nightly builds 2025-09-23 13:36:04 +02:00
bfba9ab810
build: get latest nightly build version from git.flow-control.dev 2025-09-23 13:33:50 +02:00
6a84c222d0
build: reverse upload order of nightly builds 2025-09-23 13:23:56 +02:00
f2b1451b3e
build: do not mark nightly builds as pre-release on codeberg 2025-09-23 13:22:13 +02:00
5445651776
build: fix typo in nightly build release notes 2025-09-23 13:21:31 +02:00
db16c26f0c
build: add nightly build uploads to codeberg.org and git.flow-control.dev 2025-09-23 13:07:24 +02:00
366dde0144
build: read github tag name with jq in make_nightly_build 2025-09-23 13:04:40 +02:00
a870254166
build: improve nightly release notes commit references 2025-09-22 22:02:15 +02:00
099444f84d
build: use commit hash in nightly release notes 2025-09-22 21:57:24 +02:00
733c24ca16
build: add version to nightly build release notes 2025-09-22 21:50:50 +02:00
87a72195d7
build: misc clean-ups in make_nightly_build 2025-09-22 21:48:55 +02:00
7555331c1f
build: fix make_nightly_build release notes query 2025-09-22 21:41:44 +02:00
7c6712d7a4
build: add explicit repo parameter to gh release create in make_nightly_build 2025-09-22 21:37:47 +02:00
0006a056db
build: add version check to make_nightly_build 2025-09-22 21:37:29 +02:00
d611f74cfb
build: fix git log call in make_nightly_build 2025-09-22 21:31:06 +02:00
e92f4fe9b1
build: add nightly build helper script 2025-09-22 21:20:41 +02:00
52996ed57d
feat: make AST keybindings more intuitive 2025-09-22 13:07:03 +02:00
1ef77601e3
feat: allow next/previous sibling functions to work with no selection 2025-09-22 13:06:53 +02:00
8100e7d52b
refactor: improve const correctness in AST navigation functions 2025-09-22 12:58:10 +02:00
30af629a1a
refactor: expose CurSel.to_selection method 2025-09-22 12:55:31 +02:00
99dc805817
feat: add flow mode keybinds for unnamed AST sibling movement 2025-09-22 12:26:43 +02:00
60016a3d03
feat: improve expand_selection by selecting top selection matching node 2025-09-22 12:26:43 +02:00
4035cefcaf
feat: add optional integer arguments to goto and goto_offset commands 2025-09-17 23:05:21 +02:00
2461717f11
feat: add support for byte offsets in file links to navigate command 2025-09-17 22:47:50 +02:00
7228a604b0
feat: add byte offset support to vim style '+' cli arguments
This adds support for using `+b{offset}` on the command line.
2025-09-17 22:46:35 +02:00
219b8cd00a
feat: support byte offsets in file links
This adds support for a 'b' prefix to the first file link argument
to denote a byte offset.

`{path/to/file.ext}:b{offset}`
2025-09-17 22:42:25 +02:00
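The `{path}:b{offset}` link form above can be split with plain shell parameter expansion. A minimal sketch (the `link` value is an invented example; the flow CLI itself is not invoked here):

```shell
# split a "{path}:b{offset}" file link into its parts (example value, not a real file)
link="src/main.zig:b2048"
path="${link%%:b*}"    # everything before ":b" -> the file path
offset="${link##*:b}"  # everything after ":b"  -> the byte offset
echo "path=$path offset=$offset"
```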
7c5a22c959
feat: add goto_offset keybind "b" in goto mini mode
This effectively makes `ctrl+g b` the goto_offset keybinding.
2025-09-17 22:18:45 +02:00
30a457158c
feat: add goto_offset mini mode and command 2025-09-17 22:18:20 +02:00
18cd62ba7e
feat: add editor goto_byte_offset command 2025-09-17 22:17:48 +02:00
935b178d89
feat: add Buffer.Node.byte_offset_to_line_and_col and testcase 2025-09-17 22:17:00 +02:00
1658c9e3b4
refactor: add crlf mode testcase for Buffer.Node.get_byte_pos 2025-09-17 22:16:07 +02:00
933126e2a0
feat: add support for {row}:{column} syntax in goto mini mode 2025-09-17 20:39:45 +02:00
59921d8e10
feat: restore cursor column when cancelling goto mini mode
This commit refactors the numeric_input mini mode to make the input value
type generic. This allows the goto mini mode to store the origin column
along with the row. Also, this will allow more complex numeric_input modes,
for example a goto mini mode that supports column and row.
2025-09-17 10:04:27 +02:00
9bdc3e0a0a
fix: handle completion items with no type icon
superhtml fix
2025-09-13 20:06:44 +02:00
76600bc6bd
fix: handle completion items with no insert and/or replace coordinates
superhtml fix
2025-09-13 20:06:04 +02:00
67b214675f
refactor: log issues in LSP completion item messages 2025-09-13 20:05:28 +02:00
39 changed files with 1331 additions and 527 deletions


@@ -30,8 +30,8 @@
.hash = "fuzzig-0.1.1-Ji0xivxIAQBD0g8O_NV_0foqoPf3elsg9Sc3pNfdVH4D",
},
.vaxis = .{
.url = "git+https://github.com/neurocyte/libvaxis?ref=zig-0.15#db66bacf4799945c9d99704bf150112df2d688cf",
.hash = "vaxis-0.5.1-BWNV_J0eCQD_x1nEGocK1PgaTZU9L81qc9Vf3IIGx7W2",
.url = "git+https://github.com/neurocyte/libvaxis?ref=main#35c384757b713c206d11efd76448eaea429f587e",
.hash = "vaxis-0.5.1-BWNV_GQeCQBDaH91XVg4ql_F5RBv__F_Y-_eQ2nCuyNk",
},
.zeit = .{
.url = "git+https://github.com/rockorager/zeit?ref=zig-0.15#ed2ca60db118414bda2b12df2039e33bad3b0b88",

contrib/make_nightly_build Executable file

@@ -0,0 +1,175 @@
#!/bin/bash
set -e

for arg in "$@"; do
    case "$arg" in
        --no-github) NO_GITHUB=1 ;;
        --no-codeberg) NO_CODEBERG=1 ;;
        --no-flowcontrol) NO_FLOWCONTROL=1 ;;
        --allow-dirty) ALLOW_DIRTY=1 ;;
    esac
done

builddir="nightly-build"
DESTDIR="$(pwd)/$builddir"
BASEDIR="$(cd "$(dirname "$0")/.." && pwd)"
APPNAME="$(basename "$BASEDIR")"
title="$APPNAME nightly build"
repo="neurocyte/$APPNAME-nightly"
release_notes="$BASEDIR/$builddir-release-notes"

cd "$BASEDIR"

if [ -e "$DESTDIR" ]; then
    echo directory \"$builddir\" already exists
    exit 1
fi
if [ -e "$release_notes" ]; then
    echo file \""$release_notes"\" already exists
    exit 1
fi

DIFF="$(git diff --stat --patch HEAD)"
if [ -z "$ALLOW_DIRTY" ]; then
    if [ -n "$DIFF" ]; then
        echo there are outstanding changes:
        echo "$DIFF"
        exit 1
    fi
    UNPUSHED="$(git log --pretty=oneline '@{u}...')"
    if [ -n "$UNPUSHED" ]; then
        echo there are unpushed commits:
        echo "$UNPUSHED"
        exit 1
    fi
fi

# get latest version tag
if [ -z "$NO_FLOWCONTROL" ]; then
    last_nightly_version=$(curl -s https://git.flow-control.dev/api/v1/repos/neurocyte/flow-nightly/releases/latest | jq -r .tag_name)
elif [ -z "$NO_GITHUB" ]; then
    last_nightly_version=$(curl -s "https://api.github.com/repos/$repo/releases/latest" | jq -r .tag_name)
elif [ -z "$NO_CODEBERG" ]; then
    last_nightly_version=$(curl -s https://codeberg.org/api/v1/repos/neurocyte/flow-nightly/releases/latest | jq -r .tag_name)
fi
[ -z "$last_nightly_version" ] && {
    echo "failed to fetch $title latest version"
    exit 1
}

local_version="$(git --git-dir "$BASEDIR/.git" describe)"
if [ "$1" != "--no-github" ]; then
    if [ "$local_version" == "$last_nightly_version" ]; then
        echo "$title is already at version $last_nightly_version"
        exit 1
    fi
fi

echo
echo "building $title version $local_version... (previous $last_nightly_version)"
echo
git log "${last_nightly_version}..HEAD" --pretty="format:neurocyte/$APPNAME@%h %s"
echo

echo running tests...
./zig build test

echo building...
./zig build -Dpackage_release --prefix "$DESTDIR/build"
VERSION=$(/bin/cat "$DESTDIR/build/version")

git archive --format=tar.gz --output="$DESTDIR/flow-$VERSION-source.tar.gz" HEAD
git archive --format=zip --output="$DESTDIR/flow-$VERSION-source.zip" HEAD

cd "$DESTDIR/build"
TARGETS=$(/bin/ls)
for target in $TARGETS; do
    if [ -d "$target" ]; then
        cd "$target"
        if [ "${target:0:8}" == "windows-" ]; then
            echo packing zip "$target"...
            zip -r "../../${APPNAME}-${VERSION}-${target}.zip" ./*
            cd ..
        else
            echo packing tar "$target"...
            tar -czf "../../${APPNAME}-${VERSION}-${target}.tar.gz" -- *
            cd ..
        fi
    fi
done
cd ..
rm -r build

TARFILES=$(/bin/ls)
for tarfile in $TARFILES; do
    echo signing "$tarfile"...
    gpg --local-user 4E6CF7234FFC4E14531074F98EB1E1BB660E3FB9 --detach-sig "$tarfile"
    sha256sum -b "$tarfile" >"${tarfile}.sha256"
done

echo "done making $title $VERSION @ $DESTDIR"
echo
/bin/ls -lah
cd ..

{
    echo "## commits in this build"
    echo
    git log "${last_nightly_version}..HEAD" --pretty="format:neurocyte/$APPNAME@%h %s"
    echo
    echo
    echo "## contributors"
    git shortlog -s -n "${last_nightly_version}..HEAD" | cut -b 8-
    echo
    echo "## downloads"
    echo "[flow-control.dev](https://git.flow-control.dev/neurocyte/flow-nightly/releases/tag/$VERSION) (source only)"
    echo "[github.com](https://github.com/neurocyte/flow-nightly/releases/tag/$VERSION) (binaries & source)"
    echo "[codeberg.org](https://codeberg.org/neurocyte/flow-nightly/releases/tag/$VERSION) (binaries & source)"
} >"$release_notes"
cat "$release_notes"

ASSETS=""
if [ -z "$NO_FLOWCONTROL" ]; then
    ASSETS="$ASSETS --asset $DESTDIR/flow-${VERSION}-source.tar.gz"
    ASSETS="$ASSETS --asset $DESTDIR/flow-${VERSION}-source.tar.gz.sig"
    ASSETS="$ASSETS --asset $DESTDIR/flow-${VERSION}-source.tar.gz.sha256"
    ASSETS="$ASSETS --asset $DESTDIR/flow-${VERSION}-source.zip"
    ASSETS="$ASSETS --asset $DESTDIR/flow-${VERSION}-source.zip.sig"
    ASSETS="$ASSETS --asset $DESTDIR/flow-${VERSION}-source.zip.sha256"
    echo uploading to git.flow-control.dev
    tea releases create --login flow-control --repo "$repo" --tag "$VERSION" --title "$title $VERSION" --note-file "$release_notes" \
        $ASSETS
fi
if [ -z "$NO_CODEBERG" ]; then
    for a in $DESTDIR/*; do
        ASSETS="$ASSETS --asset $a"
    done
    echo uploading to codeberg.org
    tea releases create --login codeberg --repo "$repo" --tag "$VERSION" --title "$title $VERSION" --note-file "$release_notes" \
        $ASSETS
fi
if [ -z "$NO_GITHUB" ]; then
    echo uploading to github.com
    gh release create "$VERSION" --repo "$repo" --title "$title $VERSION" --notes-file "$release_notes" $DESTDIR/*
fi
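In the packaging loop above, the script chooses zip vs. tar.gz by testing the first eight characters of each target directory name with a bash substring expansion. The same test in isolation (the target names below are invented examples):

```shell
# pick an archive format from a target directory name, as the packaging loop does
archive_format() {
    local target="$1"
    # ${target:0:8} takes the first eight characters of the name
    if [ "${target:0:8}" == "windows-" ]; then
        echo zip
    else
        echo tar.gz
    fi
}

archive_format "windows-x86_64"  # -> zip
archive_format "linux-aarch64"   # -> tar.gz
```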


@@ -48,18 +48,18 @@ pub fn send_request(
method: []const u8,
m: anytype,
ctx: anytype,
) (OutOfMemoryError || SpawnError)!void {
var cb = std.ArrayList(u8).init(self.allocator);
) (OutOfMemoryError || SpawnError || std.Io.Writer.Error)!void {
var cb: std.Io.Writer.Allocating = .init(self.allocator);
defer cb.deinit();
try cbor.writeValue(cb.writer(), m);
return RequestContext(@TypeOf(ctx)).send(allocator, self.pid.ref(), ctx, tp.message.fmt(.{ "REQ", method, cb.items }));
try cbor.writeValue(&cb.writer, m);
return RequestContext(@TypeOf(ctx)).send(allocator, self.pid.ref(), ctx, tp.message.fmt(.{ "REQ", method, cb.written() }));
}
pub fn send_notification(self: *const Self, method: []const u8, m: anytype) (OutOfMemoryError || SendError)!void {
var cb = std.ArrayList(u8).init(self.allocator);
pub fn send_notification(self: *const Self, method: []const u8, m: anytype) (OutOfMemoryError || SendError || std.Io.Writer.Error)!void {
var cb: std.Io.Writer.Allocating = .init(self.allocator);
defer cb.deinit();
try cbor.writeValue(cb.writer(), m);
return self.send_notification_raw(method, cb.items);
try cbor.writeValue(&cb.writer, m);
return self.send_notification_raw(method, cb.written());
}
pub fn send_notification_raw(self: *const Self, method: []const u8, cb: []const u8) SendError!void {
@@ -82,27 +82,27 @@ pub const ErrorCode = enum(i32) {
RequestCancelled = -32800,
};
pub fn send_response(allocator: std.mem.Allocator, to: tp.pid_ref, cbor_id: []const u8, result: anytype) (SendError || OutOfMemoryError)!void {
var cb = std.ArrayList(u8).init(allocator);
pub fn send_response(allocator: std.mem.Allocator, to: tp.pid_ref, cbor_id: []const u8, result: anytype) (SendError || OutOfMemoryError || std.Io.Writer.Error)!void {
var cb: std.Io.Writer.Allocating = .init(allocator);
defer cb.deinit();
const writer = cb.writer();
const writer = &cb.writer;
try cbor.writeArrayHeader(writer, 3);
try cbor.writeValue(writer, "RSP");
try writer.writeAll(cbor_id);
try cbor.writeValue(cb.writer(), result);
to.send_raw(.{ .buf = cb.items }) catch return error.SendFailed;
try cbor.writeValue(writer, result);
to.send_raw(.{ .buf = cb.written() }) catch return error.SendFailed;
}
pub fn send_error_response(allocator: std.mem.Allocator, to: tp.pid_ref, cbor_id: []const u8, code: ErrorCode, message: []const u8) (SendError || OutOfMemoryError)!void {
var cb = std.ArrayList(u8).init(allocator);
pub fn send_error_response(allocator: std.mem.Allocator, to: tp.pid_ref, cbor_id: []const u8, code: ErrorCode, message: []const u8) (SendError || OutOfMemoryError || std.Io.Writer.Error)!void {
var cb: std.Io.Writer.Allocating = .init(allocator);
defer cb.deinit();
const writer = cb.writer();
const writer = &cb.writer;
try cbor.writeArrayHeader(writer, 4);
try cbor.writeValue(writer, "ERR");
try writer.writeAll(cbor_id);
try cbor.writeValue(cb.writer(), code);
try cbor.writeValue(cb.writer(), message);
to.send_raw(.{ .buf = cb.items }) catch return error.SendFailed;
try cbor.writeValue(writer, code);
try cbor.writeValue(writer, message);
to.send_raw(.{ .buf = cb.written() }) catch return error.SendFailed;
}
pub fn close(self: *Self) void {
@@ -172,6 +172,8 @@ const Process = struct {
sp_tag: [:0]const u8,
log_file: ?std.fs.File = null,
log_file_path: ?[]const u8 = null,
log_file_writer: ?std.fs.File.Writer = null,
log_file_writer_buf: [1024]u8 = undefined,
next_id: i32 = 0,
requests: std.StringHashMap(tp.pid),
state: enum { init, running } = .init,
@@ -181,7 +183,7 @@ const Process = struct {
const Receiver = tp.Receiver(*Process);
pub fn create(allocator: std.mem.Allocator, project: []const u8, cmd: tp.message) (error{ ThespianSpawnFailed, InvalidLspCommand } || OutOfMemoryError || cbor.Error)!tp.pid {
pub fn create(allocator: std.mem.Allocator, project: []const u8, cmd: tp.message) (error{ ThespianSpawnFailed, InvalidLspCommand } || OutOfMemoryError || cbor.Error || std.Io.Writer.Error)!tp.pid {
var tag: []const u8 = undefined;
if (try cbor.match(cmd.buf, .{tp.extract(&tag)})) {
//
@@ -194,15 +196,15 @@
}
const self = try allocator.create(Process);
errdefer allocator.destroy(self);
var sp_tag_ = std.ArrayList(u8).init(allocator);
var sp_tag_: std.Io.Writer.Allocating = .init(allocator);
defer sp_tag_.deinit();
try sp_tag_.appendSlice(tag);
try sp_tag_.appendSlice("-" ++ sp_tag);
try sp_tag_.writer.writeAll(tag);
try sp_tag_.writer.writeAll("-" ++ sp_tag);
self.* = .{
.allocator = allocator,
.cmd = try cmd.clone(allocator),
.receiver = Receiver.init(receive, self),
.recv_buf = std.ArrayList(u8).init(allocator),
.recv_buf = .empty,
.parent = tp.self_pid().clone(),
.tag = try allocator.dupeZ(u8, tag),
.project = try allocator.dupeZ(u8, project),
@@ -220,11 +222,14 @@
req.value_ptr.deinit();
}
self.allocator.free(self.sp_tag);
self.recv_buf.deinit();
self.recv_buf.deinit(self.allocator);
self.allocator.free(self.cmd.buf);
self.close() catch {};
self.write_log("### terminated LSP process ###\n", .{});
if (self.log_file) |file| file.close();
if (self.log_file) |file| {
if (self.log_file_writer) |*writer| writer.interface.flush() catch {};
file.close();
}
if (self.log_file_path) |file_path| self.allocator.free(file_path);
}
@@ -265,12 +270,13 @@
self.sp = tp.subprocess.init(self.allocator, self.cmd, self.sp_tag, .Pipe) catch |e| return tp.exit_error(e, @errorReturnTrace());
tp.receive(&self.receiver);
var log_file_path = std.ArrayList(u8).init(self.allocator);
var log_file_path: std.Io.Writer.Allocating = .init(self.allocator);
defer log_file_path.deinit();
const state_dir = root.get_state_dir() catch |e| return tp.exit_error(e, @errorReturnTrace());
log_file_path.writer().print("{s}{c}lsp-{s}.log", .{ state_dir, std.fs.path.sep, self.tag }) catch |e| return tp.exit_error(e, @errorReturnTrace());
self.log_file = std.fs.createFileAbsolute(log_file_path.items, .{ .truncate = true }) catch |e| return tp.exit_error(e, @errorReturnTrace());
log_file_path.writer.print("{s}{c}lsp-{s}.log", .{ state_dir, std.fs.path.sep, self.tag }) catch |e| return tp.exit_error(e, @errorReturnTrace());
self.log_file = std.fs.createFileAbsolute(log_file_path.written(), .{ .truncate = true }) catch |e| return tp.exit_error(e, @errorReturnTrace());
self.log_file_path = log_file_path.toOwnedSlice() catch null;
if (self.log_file) |log_file| self.log_file_writer = log_file.writer(&self.log_file_writer_buf);
}
fn receive(self: *Process, from: tp.pid_ref, m: tp.message) tp.result {
@@ -441,7 +447,7 @@
}
fn handle_output(self: *Process, bytes: []const u8) Error!void {
try self.recv_buf.appendSlice(bytes);
try self.recv_buf.appendSlice(self.allocator, bytes);
self.write_log("### RECV:\n{s}\n###\n", .{bytes});
self.frame_message_recv() catch |e| {
self.write_log("### RECV error: {any}\n", .{e});
@@ -473,9 +479,9 @@
const id = self.next_id;
self.next_id += 1;
var request = std.ArrayList(u8).init(self.allocator);
var request: std.Io.Writer.Allocating = .init(self.allocator);
defer request.deinit();
const msg_writer = request.writer();
const msg_writer = &request.writer;
try cbor.writeMapHeader(msg_writer, 4);
try cbor.writeValue(msg_writer, "jsonrpc");
try cbor.writeValue(msg_writer, "2.0");
@@ -486,32 +492,32 @@
try cbor.writeValue(msg_writer, "params");
_ = try msg_writer.write(params_cb);
const json = try cbor.toJsonAlloc(self.allocator, request.items);
const json = try cbor.toJsonAlloc(self.allocator, request.written());
defer self.allocator.free(json);
var output = std.ArrayList(u8).init(self.allocator);
var output: std.Io.Writer.Allocating = .init(self.allocator);
defer output.deinit();
const writer = output.writer();
const writer = &output.writer;
const terminator = "\r\n";
const content_length = json.len + terminator.len;
try writer.print("Content-Length: {d}\r\nContent-Type: application/vscode-jsonrpc; charset=utf-8\r\n\r\n", .{content_length});
_ = try writer.write(json);
_ = try writer.write(terminator);
sp.send(output.items) catch return error.SendFailed;
self.write_log("### SEND request:\n{s}\n###\n", .{output.items});
sp.send(output.written()) catch return error.SendFailed;
self.write_log("### SEND request:\n{s}\n###\n", .{output.written()});
var cbor_id = std.ArrayList(u8).init(self.allocator);
var cbor_id: std.Io.Writer.Allocating = .init(self.allocator);
defer cbor_id.deinit();
try cbor.writeValue(cbor_id.writer(), id);
try cbor.writeValue(&cbor_id.writer, id);
try self.requests.put(try cbor_id.toOwnedSlice(), from.clone());
}
fn send_response(self: *Process, cbor_id: []const u8, result_cb: []const u8) (error{Closed} || SendError || cbor.Error || cbor.JsonEncodeError)!void {
const sp = if (self.sp) |*sp| sp else return error.Closed;
var response = std.ArrayList(u8).init(self.allocator);
var response: std.Io.Writer.Allocating = .init(self.allocator);
defer response.deinit();
const msg_writer = response.writer();
const msg_writer = &response.writer;
try cbor.writeMapHeader(msg_writer, 3);
try cbor.writeValue(msg_writer, "jsonrpc");
try cbor.writeValue(msg_writer, "2.0");
@@ -520,27 +526,28 @@
try cbor.writeValue(msg_writer, "result");
_ = try msg_writer.write(result_cb);
const json = try cbor.toJsonAlloc(self.allocator, response.items);
const json = try cbor.toJsonAlloc(self.allocator, response.written());
defer self.allocator.free(json);
var output = std.ArrayList(u8).init(self.allocator);
var output: std.Io.Writer.Allocating = .init(self.allocator);
defer output.deinit();
const writer = output.writer();
const writer = &output.writer;
const terminator = "\r\n";
const content_length = json.len + terminator.len;
try writer.print("Content-Length: {d}\r\nContent-Type: application/vscode-jsonrpc; charset=utf-8\r\n\r\n", .{content_length});
_ = try writer.write(json);
_ = try writer.write(terminator);
sp.send(output.items) catch return error.SendFailed;
self.write_log("### SEND response:\n{s}\n###\n", .{output.items});
sp.send(output.written()) catch return error.SendFailed;
self.write_log("### SEND response:\n{s}\n###\n", .{output.written()});
}
fn send_error_response(self: *Process, cbor_id: []const u8, error_code: ErrorCode, message: []const u8) (error{Closed} || SendError || cbor.Error || cbor.JsonEncodeError)!void {
const sp = if (self.sp) |*sp| sp else return error.Closed;
var response = std.ArrayList(u8).init(self.allocator);
var response: std.Io.Writer.Allocating = .init(self.allocator);
defer response.deinit();
const msg_writer = response.writer();
const msg_writer = &response.writer;
try cbor.writeMapHeader(msg_writer, 3);
try cbor.writeValue(msg_writer, "jsonrpc");
try cbor.writeValue(msg_writer, "2.0");
@@ -553,19 +560,20 @@
try cbor.writeValue(msg_writer, "message");
try cbor.writeValue(msg_writer, message);
const json = try cbor.toJsonAlloc(self.allocator, response.items);
const json = try cbor.toJsonAlloc(self.allocator, response.written());
defer self.allocator.free(json);
var output = std.ArrayList(u8).init(self.allocator);
var output: std.Io.Writer.Allocating = .init(self.allocator);
defer output.deinit();
const writer = output.writer();
const writer = &output.writer;
const terminator = "\r\n";
const content_length = json.len + terminator.len;
try writer.print("Content-Length: {d}\r\nContent-Type: application/vscode-jsonrpc; charset=utf-8\r\n\r\n", .{content_length});
_ = try writer.write(json);
_ = try writer.write(terminator);
sp.send(output.items) catch return error.SendFailed;
self.write_log("### SEND error response:\n{s}\n###\n", .{output.items});
sp.send(output.written()) catch return error.SendFailed;
self.write_log("### SEND error response:\n{s}\n###\n", .{output.written()});
}
fn send_notification(self: *Process, method: []const u8, params_cb: []const u8) Error!void {
@@ -573,9 +581,9 @@
const have_params = !(cbor.match(params_cb, cbor.null_) catch false);
var notification = std.ArrayList(u8).init(self.allocator);
var notification: std.Io.Writer.Allocating = .init(self.allocator);
defer notification.deinit();
const msg_writer = notification.writer();
const msg_writer = &notification.writer;
try cbor.writeMapHeader(msg_writer, 3);
try cbor.writeValue(msg_writer, "jsonrpc");
try cbor.writeValue(msg_writer, "2.0");
@@ -588,19 +596,20 @@
try cbor.writeMapHeader(msg_writer, 0);
}
const json = try cbor.toJsonAlloc(self.allocator, notification.items);
const json = try cbor.toJsonAlloc(self.allocator, notification.written());
defer self.allocator.free(json);
var output = std.ArrayList(u8).init(self.allocator);
var output: std.Io.Writer.Allocating = .init(self.allocator);
defer output.deinit();
const writer = output.writer();
const writer = &output.writer;
const terminator = "\r\n";
const content_length = json.len + terminator.len;
try writer.print("Content-Length: {d}\r\nContent-Type: application/vscode-jsonrpc; charset=utf-8\r\n\r\n", .{content_length});
_ = try writer.write(json);
_ = try writer.write(terminator);
sp.send(output.items) catch return error.SendFailed;
self.write_log("### SEND notification:\n{s}\n###\n", .{output.items});
sp.send(output.written()) catch return error.SendFailed;
self.write_log("### SEND notification:\n{s}\n###\n", .{output.written()});
}
fn frame_message_recv(self: *Process) Error!void {
@@ -609,11 +618,11 @@
const headers_data = self.recv_buf.items[0..headers_end];
const headers = try Headers.parse(headers_data);
if (self.recv_buf.items.len - (headers_end + sep.len) < headers.content_length) return;
const buf = try self.recv_buf.toOwnedSlice();
const buf = try self.recv_buf.toOwnedSlice(self.allocator);
const data = buf[headers_end + sep.len .. headers_end + sep.len + headers.content_length];
const rest = buf[headers_end + sep.len + headers.content_length ..];
defer self.allocator.free(buf);
if (rest.len > 0) try self.recv_buf.appendSlice(rest);
if (rest.len > 0) try self.recv_buf.appendSlice(self.allocator, rest);
const message = .{ .body = data[0..headers.content_length] };
const cb = try cbor.fromJsonAlloc(self.allocator, message.body);
defer self.allocator.free(cb);
@@ -627,9 +636,9 @@
const json = if (params) |p| try cbor.toJsonPrettyAlloc(self.allocator, p) else null;
defer if (json) |p| self.allocator.free(p);
self.write_log("### RECV req: {s}\nmethod: {s}\n{s}\n###\n", .{ json_id, method, json orelse "no params" });
var request = std.ArrayList(u8).init(self.allocator);
var request: std.Io.Writer.Allocating = .init(self.allocator);
defer request.deinit();
const writer = request.writer();
const writer = &request.writer;
try cbor.writeArrayHeader(writer, 7);
try cbor.writeValue(writer, sp_tag);
try cbor.writeValue(writer, self.project);
@@ -638,7 +647,7 @@
try cbor.writeValue(writer, method);
try writer.writeAll(cbor_id);
if (params) |p| _ = try writer.write(p) else try cbor.writeValue(writer, null);
self.parent.send_raw(.{ .buf = request.items }) catch return error.SendFailed;
self.parent.send_raw(.{ .buf = request.written() }) catch return error.SendFailed;
}
fn receive_lsp_response(self: *Process, cbor_id: []const u8, result: ?[]const u8, err: ?[]const u8) Error!void {
@ -650,9 +659,9 @@ const Process = struct {
defer if (json_err) |p| self.allocator.free(p);
self.write_log("### RECV rsp: {s} {s}\n{s}\n###\n", .{ json_id, if (json_err) |_| "error" else "response", json_err orelse json orelse "no result" });
const from = self.requests.get(cbor_id) orelse return;
var response = std.ArrayList(u8).init(self.allocator);
var response: std.Io.Writer.Allocating = .init(self.allocator);
defer response.deinit();
const writer = response.writer();
const writer = &response.writer;
try cbor.writeArrayHeader(writer, 4);
try cbor.writeValue(writer, sp_tag);
try cbor.writeValue(writer, self.tag);
@@ -663,16 +672,16 @@
try cbor.writeValue(writer, "result");
_ = try writer.write(result_);
}
from.send_raw(.{ .buf = response.items }) catch return error.SendFailed;
from.send_raw(.{ .buf = response.written() }) catch return error.SendFailed;
}
fn receive_lsp_notification(self: *Process, method: []const u8, params: ?[]const u8) Error!void {
const json = if (params) |p| try cbor.toJsonPrettyAlloc(self.allocator, p) else null;
defer if (json) |p| self.allocator.free(p);
self.write_log("### RECV notify:\nmethod: {s}\n{s}\n###\n", .{ method, json orelse "no params" });
var notification = std.ArrayList(u8).init(self.allocator);
var notification: std.Io.Writer.Allocating = .init(self.allocator);
defer notification.deinit();
const writer = notification.writer();
const writer = &notification.writer;
try cbor.writeArrayHeader(writer, 6);
try cbor.writeValue(writer, sp_tag);
try cbor.writeValue(writer, self.project);
@@ -680,13 +689,13 @@
try cbor.writeValue(writer, "notify");
try cbor.writeValue(writer, method);
if (params) |p| _ = try writer.write(p) else try cbor.writeValue(writer, null);
self.parent.send_raw(.{ .buf = notification.items }) catch return error.SendFailed;
self.parent.send_raw(.{ .buf = notification.written() }) catch return error.SendFailed;
}
fn write_log(self: *Process, comptime format: []const u8, args: anytype) void {
if (!debug_lsp) return;
const file = self.log_file orelse return;
file.writer().print(format, args) catch {};
const file_writer = if (self.log_file_writer) |*writer| writer else return;
file_writer.interface.print(format, args) catch {};
}
};


@@ -47,8 +47,8 @@ const OutOfMemoryError = error{OutOfMemory};
const SpawnError = (OutOfMemoryError || error{ThespianSpawnFailed});
pub const InvalidMessageError = error{ InvalidMessage, InvalidMessageField, InvalidTargetURI, InvalidMapType };
pub const StartLspError = (error{ ThespianSpawnFailed, Timeout, InvalidLspCommand } || LspError || OutOfMemoryError || cbor.Error);
pub const LspError = (error{ NoLsp, LspFailed } || OutOfMemoryError);
pub const ClientError = (error{ClientFailed} || OutOfMemoryError);
pub const LspError = (error{ NoLsp, LspFailed } || OutOfMemoryError || std.Io.Writer.Error);
pub const ClientError = (error{ClientFailed} || OutOfMemoryError || std.Io.Writer.Error);
pub const LspOrClientError = (LspError || ClientError);
const File = struct {
@@ -80,7 +80,7 @@ pub fn init(allocator: std.mem.Allocator, name: []const u8) OutOfMemoryError!Sel
.open_time = std.time.milliTimestamp(),
.language_servers = std.StringHashMap(*const LSP).init(allocator),
.file_language_server = std.StringHashMap(*const LSP).init(allocator),
.tasks = std.ArrayList(Task).init(allocator),
.tasks = .empty,
.logger = log.logger("project"),
.logger_lsp = log.logger("lsp"),
.logger_git = log.logger("git"),
@@ -104,7 +104,7 @@ pub fn deinit(self: *Self) void {
self.files.deinit(self.allocator);
self.pending.deinit(self.allocator);
for (self.tasks.items) |task| self.allocator.free(task.command);
self.tasks.deinit();
self.tasks.deinit(self.allocator);
self.logger_lsp.deinit();
self.logger_git.deinit();
self.logger.deinit();
@@ -216,7 +216,7 @@ pub fn restore_state_v1(self: *Self, data: []const u8) !void {
continue;
}
tp.trace(tp.channel.debug, .{ "restore_state_v1", "task", command, mtime });
(try self.tasks.addOne()).* = .{
(try self.tasks.addOne(self.allocator)).* = .{
.command = try self.allocator.dupe(u8, command),
.mtime = mtime,
};
@@ -237,6 +237,7 @@ pub fn restore_state_v0(self: *Self, data: []const u8) error{
BadArrayAllocExtract,
InvalidMapType,
InvalidUnion,
WriteFailed,
}!void {
tp.trace(tp.channel.debug, .{"restore_state_v0"});
defer self.sort_files_by_mtime();
@@ -309,14 +310,16 @@ fn get_language_server(self: *Self, file_path: []const u8) LspError!*const LSP {
}
fn make_URI(self: *Self, file_path: ?[]const u8) LspError![]const u8 {
var buf = std.ArrayList(u8).init(self.allocator);
var buf: std.Io.Writer.Allocating = .init(self.allocator);
defer buf.deinit();
const writer = &buf.writer;
if (file_path) |path| {
if (std.fs.path.isAbsolute(path)) {
try buf.writer().print("file://{s}", .{path});
try writer.print("file://{s}", .{path});
} else {
try buf.writer().print("file://{s}{c}{s}", .{ self.name, std.fs.path.sep, path });
try writer.print("file://{s}{c}{s}", .{ self.name, std.fs.path.sep, path });
}
} else try buf.writer().print("file://{s}", .{self.name});
} else try writer.print("file://{s}", .{self.name});
return buf.toOwnedSlice();
}
@@ -389,12 +392,12 @@ pub fn query_recent_files(self: *Self, from: tp.pid_ref, max: usize, query: []co
score: i32,
matches: []const usize,
};
var matches = std.ArrayList(Match).init(self.allocator);
var matches: std.ArrayList(Match) = .empty;
for (self.files.items) |file| {
const match = searcher.scoreMatches(file.path, query);
if (match.score) |score| {
(try matches.addOne()).* = .{
(try matches.addOne(self.allocator)).* = .{
.path = file.path,
.type = file.type,
.icon = file.icon,
@@ -551,12 +554,13 @@ }
}
pub fn request_tasks(self: *Self, from: tp.pid_ref) ClientError!void {
var message = std.ArrayList(u8).init(self.allocator);
const writer = message.writer();
var message: std.Io.Writer.Allocating = .init(self.allocator);
defer message.deinit();
const writer = &message.writer;
try cbor.writeArrayHeader(writer, self.tasks.items.len);
for (self.tasks.items) |task|
try cbor.writeValue(writer, task.command);
from.send_raw(.{ .buf = message.items }) catch return error.ClientFailed;
from.send_raw(.{ .buf = message.written() }) catch return error.ClientFailed;
}
pub fn add_task(self: *Self, command: []const u8) OutOfMemoryError!void {
@ -569,7 +573,7 @@ pub fn add_task(self: *Self, command: []const u8) OutOfMemoryError!void {
return;
};
tp.trace(tp.channel.debug, .{ "project", self.name, "add_task", command, mtime });
(try self.tasks.addOne()).* = .{
(try self.tasks.addOne(self.allocator)).* = .{
.command = try self.allocator.dupe(u8, command),
.mtime = mtime,
};
@ -615,8 +619,8 @@ pub fn did_change(self: *Self, file_path: []const u8, version: usize, text_dst:
}
var dizzy_edits = std.ArrayListUnmanaged(dizzy.Edit){};
var edits_cb = std.ArrayList(u8).init(arena);
const writer = edits_cb.writer();
var edits_cb: std.Io.Writer.Allocating = .init(self.allocator);
defer edits_cb.deinit();
const writer = &edits_cb.writer;
const scratch_len = 4 * (text_dst.len + text_src.len) + 2;
const scratch = blk: {
@ -674,16 +678,17 @@ pub fn did_change(self: *Self, file_path: []const u8, version: usize, text_dst:
{
const frame = tracy.initZone(@src(), .{ .name = "send" });
defer frame.deinit();
var msg = std.ArrayList(u8).init(arena);
const msg_writer = msg.writer();
var msg: std.Io.Writer.Allocating = .init(self.allocator);
defer msg.deinit();
const msg_writer = &msg.writer;
try cbor.writeMapHeader(msg_writer, 2);
try cbor.writeValue(msg_writer, "textDocument");
try cbor.writeValue(msg_writer, .{ .uri = uri, .version = version });
try cbor.writeValue(msg_writer, "contentChanges");
try cbor.writeArrayHeader(msg_writer, edits_count);
_ = try msg_writer.write(edits_cb.items);
_ = try msg_writer.write(edits_cb.written());
lsp.send_notification_raw("textDocument/didChange", msg.items) catch return error.LspFailed;
lsp.send_notification_raw("textDocument/didChange", msg.written()) catch return error.LspFailed;
}
}
@ -1005,10 +1010,17 @@ fn send_completion_items(to: tp.pid_ref, file_path: []const u8, row: usize, col:
var item: []const u8 = "";
while (len > 0) : (len -= 1) {
if (!(try cbor.matchValue(&iter, cbor.extract_cbor(&item)))) return error.InvalidMessageField;
send_completion_item(to, file_path, row, col, item, if (len > 1) true else is_incomplete) catch return error.ClientFailed;
try send_completion_item(to, file_path, row, col, item, if (len > 1) true else is_incomplete);
}
}
fn invalid_field(field: []const u8) error{InvalidMessage} {
const logger = log.logger("lsp");
defer logger.deinit();
logger.print("invalid completion field '{s}'", .{field});
return error.InvalidMessage;
}
fn send_completion_item(to: tp.pid_ref, file_path: []const u8, row: usize, col: usize, item: []const u8, is_incomplete: bool) (ClientError || InvalidMessageError || cbor.Error)!void {
var label: []const u8 = "";
var label_detail: []const u8 = "";
@ -1029,53 +1041,53 @@ fn send_completion_item(to: tp.pid_ref, file_path: []const u8, row: usize, col:
var field_name: []const u8 = undefined;
if (!(try cbor.matchString(&iter, &field_name))) return error.InvalidMessage;
if (std.mem.eql(u8, field_name, "label")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&label)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&label)))) return invalid_field("label");
} else if (std.mem.eql(u8, field_name, "labelDetails")) {
var len_ = cbor.decodeMapHeader(&iter) catch return;
while (len_ > 0) : (len_ -= 1) {
if (!(try cbor.matchString(&iter, &field_name))) return error.InvalidMessage;
if (!(try cbor.matchString(&iter, &field_name))) return invalid_field("labelDetails");
if (std.mem.eql(u8, field_name, "detail")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&label_detail)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&label_detail)))) return invalid_field("labelDetails.detail");
} else if (std.mem.eql(u8, field_name, "description")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&label_description)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&label_description)))) return invalid_field("labelDetails.description");
} else {
try cbor.skipValue(&iter);
}
}
} else if (std.mem.eql(u8, field_name, "kind")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&kind)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&kind)))) return invalid_field("kind");
} else if (std.mem.eql(u8, field_name, "detail")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&detail)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&detail)))) return invalid_field("detail");
} else if (std.mem.eql(u8, field_name, "documentation")) {
var len_ = cbor.decodeMapHeader(&iter) catch return;
while (len_ > 0) : (len_ -= 1) {
if (!(try cbor.matchString(&iter, &field_name))) return error.InvalidMessage;
if (!(try cbor.matchString(&iter, &field_name))) return invalid_field("documentation");
if (std.mem.eql(u8, field_name, "kind")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&documentation_kind)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&documentation_kind)))) return invalid_field("documentation.kind");
} else if (std.mem.eql(u8, field_name, "value")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&documentation)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&documentation)))) return invalid_field("documentation.value");
} else {
try cbor.skipValue(&iter);
}
}
} else if (std.mem.eql(u8, field_name, "sortText")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&sortText)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&sortText)))) return invalid_field("sortText");
} else if (std.mem.eql(u8, field_name, "insertTextFormat")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&insertTextFormat)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&insertTextFormat)))) return invalid_field("insertTextFormat");
} else if (std.mem.eql(u8, field_name, "textEdit")) {
// var textEdit: []const u8 = ""; // { "newText": "wait_expired(${1:timeout_ns: isize})", "insert": Range, "replace": Range },
var len_ = cbor.decodeMapHeader(&iter) catch return;
while (len_ > 0) : (len_ -= 1) {
if (!(try cbor.matchString(&iter, &field_name))) return error.InvalidMessage;
if (!(try cbor.matchString(&iter, &field_name))) return invalid_field("textEdit");
if (std.mem.eql(u8, field_name, "newText")) {
if (!(try cbor.matchValue(&iter, cbor.extract(&textEdit_newText)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract(&textEdit_newText)))) return invalid_field("textEdit.newText");
} else if (std.mem.eql(u8, field_name, "insert")) {
var range_: []const u8 = undefined;
if (!(try cbor.matchValue(&iter, cbor.extract_cbor(&range_)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract_cbor(&range_)))) return invalid_field("textEdit.insert");
textEdit_insert = try read_range(range_);
} else if (std.mem.eql(u8, field_name, "replace")) {
var range_: []const u8 = undefined;
if (!(try cbor.matchValue(&iter, cbor.extract_cbor(&range_)))) return error.InvalidMessageField;
if (!(try cbor.matchValue(&iter, cbor.extract_cbor(&range_)))) return invalid_field("textEdit.replace");
textEdit_replace = try read_range(range_);
} else {
try cbor.skipValue(&iter);
@ -1085,8 +1097,8 @@ fn send_completion_item(to: tp.pid_ref, file_path: []const u8, row: usize, col:
try cbor.skipValue(&iter);
}
}
const insert = textEdit_insert orelse return error.InvalidMessageField;
const replace = textEdit_replace orelse return error.InvalidMessageField;
const insert = textEdit_insert orelse Range{ .start = .{ .line = 0, .character = 0 }, .end = .{ .line = 0, .character = 0 } };
const replace = textEdit_replace orelse Range{ .start = .{ .line = 0, .character = 0 }, .end = .{ .line = 0, .character = 0 } };
return to.send(.{
"cmd", "add_completion", .{
file_path,
@ -1139,16 +1151,16 @@ pub fn rename_symbol(self: *Self, from: tp.pid_ref, file_path: []const u8, row:
const allocator = std.heap.c_allocator;
var result: []const u8 = undefined;
// buffer the renames in order to send as a single, atomic message
var renames = std.ArrayList(Rename).init(allocator);
var renames = std.array_list.Managed(Rename).init(allocator);
defer renames.deinit();
if (try cbor.match(response.buf, .{ "child", tp.string, "result", tp.map })) {
if (try cbor.match(response.buf, .{ tp.any, tp.any, tp.any, tp.extract_cbor(&result) })) {
try decode_rename_symbol_map(result, &renames);
// write the renames message manually since there doesn't appear to be an array helper
var msg_buf = std.ArrayList(u8).init(allocator);
var msg_buf: std.Io.Writer.Allocating = .init(allocator);
defer msg_buf.deinit();
const w = msg_buf.writer();
const w = &msg_buf.writer;
try cbor.writeArrayHeader(w, 3);
try cbor.writeValue(w, "cmd");
try cbor.writeValue(w, "rename_symbol_item");
@ -1174,7 +1186,7 @@ pub fn rename_symbol(self: *Self, from: tp.pid_ref, file_path: []const u8, row:
line,
});
}
self_.from.send_raw(.{ .buf = msg_buf.items }) catch return error.ClientFailed;
self_.from.send_raw(.{ .buf = msg_buf.written() }) catch return error.ClientFailed;
}
}
}
@ -1192,7 +1204,7 @@ pub fn rename_symbol(self: *Self, from: tp.pid_ref, file_path: []const u8, row:
// decode a WorkspaceEdit record which may have shape {"changes": {}} or {"documentChanges": []}
// https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#workspaceEdit
fn decode_rename_symbol_map(result: []const u8, renames: *std.ArrayList(Rename)) (ClientError || InvalidMessageError || cbor.Error)!void {
fn decode_rename_symbol_map(result: []const u8, renames: *std.array_list.Managed(Rename)) (ClientError || InvalidMessageError || cbor.Error)!void {
var iter = result;
var len = cbor.decodeMapHeader(&iter) catch return error.InvalidMessage;
var changes: []const u8 = "";
@ -1214,7 +1226,7 @@ fn decode_rename_symbol_map(result: []const u8, renames: *std.ArrayList(Rename))
return error.ClientFailed;
}
fn decode_rename_symbol_changes(changes: []const u8, renames: *std.ArrayList(Rename)) (ClientError || InvalidMessageError || cbor.Error)!void {
fn decode_rename_symbol_changes(changes: []const u8, renames: *std.array_list.Managed(Rename)) (ClientError || InvalidMessageError || cbor.Error)!void {
var iter = changes;
var files_len = cbor.decodeMapHeader(&iter) catch return error.InvalidMessage;
while (files_len > 0) : (files_len -= 1) {
@ -1224,7 +1236,7 @@ fn decode_rename_symbol_changes(changes: []const u8, renames: *std.ArrayList(Ren
}
}
fn decode_rename_symbol_doc_changes(changes: []const u8, renames: *std.ArrayList(Rename)) (ClientError || InvalidMessageError || cbor.Error)!void {
fn decode_rename_symbol_doc_changes(changes: []const u8, renames: *std.array_list.Managed(Rename)) (ClientError || InvalidMessageError || cbor.Error)!void {
var iter = changes;
var changes_len = cbor.decodeArrayHeader(&iter) catch return error.InvalidMessage;
while (changes_len > 0) : (changes_len -= 1) {
@ -1251,7 +1263,7 @@ fn decode_rename_symbol_doc_changes(changes: []const u8, renames: *std.ArrayList
}
// https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textEdit
fn decode_rename_symbol_item(file_uri: []const u8, iter: *[]const u8, renames: *std.ArrayList(Rename)) (ClientError || InvalidMessageError || cbor.Error)!void {
fn decode_rename_symbol_item(file_uri: []const u8, iter: *[]const u8, renames: *std.array_list.Managed(Rename)) (ClientError || InvalidMessageError || cbor.Error)!void {
var text_edits_len = cbor.decodeArrayHeader(iter) catch return error.InvalidMessage;
while (text_edits_len > 0) : (text_edits_len -= 1) {
var m_range: ?Range = null;
@ -1359,15 +1371,15 @@ fn send_contents(
};
if (is_list) {
var content = std.ArrayList(u8).init(std.heap.c_allocator);
var content: std.Io.Writer.Allocating = .init(std.heap.c_allocator);
defer content.deinit();
while (len > 0) : (len -= 1) {
if (try cbor.matchValue(&iter, cbor.extract(&value))) {
try content.appendSlice(value);
if (len > 1) try content.appendSlice("\n");
try content.writer.writeAll(value);
if (len > 1) try content.writer.writeAll("\n");
}
}
return send_content_msg(to, tag, file_path, row, col, kind, content.items, range);
return send_content_msg(to, tag, file_path, row, col, kind, content.written(), range);
}
while (len > 0) : (len -= 1) {
@ -1597,7 +1609,7 @@ fn send_lsp_init_request(self: *Self, lsp: *const LSP, project_path: []const u8,
pub fn receive(self_: @This(), _: tp.message) !void {
self_.lsp.send_notification("initialized", .{}) catch return error.LspFailed;
if (self_.lsp.pid.expired()) return error.LspFailed;
self_.project.logger_lsp.print("initialized LSP: {s}", .{fmt_lsp_name_func(self_.language_server)});
self_.project.logger_lsp.print("initialized LSP: {f}", .{fmt_lsp_name_func(self_.language_server)});
}
} = .{
.language_server = try std.heap.c_allocator.dupe(u8, language_server),
@ -1909,18 +1921,14 @@ fn send_lsp_init_request(self: *Self, lsp: *const LSP, project_path: []const u8,
}, handler);
}
fn fmt_lsp_name_func(bytes: []const u8) std.fmt.Formatter(format_lsp_name_func) {
fn fmt_lsp_name_func(bytes: []const u8) std.fmt.Formatter([]const u8, format_lsp_name_func) {
return .{ .data = bytes };
}
fn format_lsp_name_func(
bytes: []const u8,
comptime fmt: []const u8,
options: std.fmt.FormatOptions,
writer: anytype,
) !void {
_ = fmt;
_ = options;
writer: *std.Io.Writer,
) std.Io.Writer.Error!void {
var iter: []const u8 = bytes;
var len = cbor.decodeArrayHeader(&iter) catch return;
var first: bool = true;
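The two hunks above show the 0.15 `std.fmt.Formatter` change: the formatter type now takes the data type plus a format function that receives a concrete `*std.Io.Writer` (no more `fmt`/`options` parameters), and it is selected with the `{f}` specifier instead of `{s}`. A hedged sketch of that shape with illustrative names:

```zig
const std = @import("std");

fn fmtQuoted(bytes: []const u8) std.fmt.Formatter([]const u8, formatQuoted) {
    return .{ .data = bytes };
}

fn formatQuoted(bytes: []const u8, writer: *std.Io.Writer) std.Io.Writer.Error!void {
    // The format function is now monomorphic over *std.Io.Writer.
    try writer.print("\"{s}\"", .{bytes});
}

// usage: logger.print("name: {f}", .{fmtQuoted(name)});
```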
@ -1935,7 +1943,7 @@ fn format_lsp_name_func(
const eol = '\n';
pub const GetLineOfFileError = (OutOfMemoryError || std.fs.File.OpenError || std.fs.File.Reader.Error);
pub const GetLineOfFileError = (OutOfMemoryError || std.fs.File.OpenError || std.fs.File.ReadError);
fn get_line_of_file(allocator: std.mem.Allocator, file_path: []const u8, line_: usize) GetLineOfFileError![]const u8 {
const line = line_ + 1;
@ -1944,7 +1952,7 @@ fn get_line_of_file(allocator: std.mem.Allocator, file_path: []const u8, line_:
const stat = try file.stat();
var buf = try allocator.alloc(u8, @intCast(stat.size));
defer allocator.free(buf);
const read_size = try file.reader().readAll(buf);
const read_size = try file.readAll(buf);
if (read_size != @as(@TypeOf(read_size), @intCast(stat.size)))
@panic("get_line_of_file: buffer underrun");



@ -224,33 +224,33 @@ pub const Leaf = struct {
} else error.BufferUnderrun;
}
inline fn dump(self: *const Leaf, l: *ArrayList(u8), abs_col: usize, metrics: Metrics) !void {
inline fn dump(self: *const Leaf, l: *std.Io.Writer, abs_col: usize, metrics: Metrics) !void {
var buf: [16]u8 = undefined;
const wcwidth = try std.fmt.bufPrint(&buf, "{d}", .{self.width(abs_col, metrics)});
if (self.bol)
try l.appendSlice("BOL ");
try l.appendSlice(wcwidth);
try l.append('"');
try l.writeAll("BOL ");
try l.writeAll(wcwidth);
try l.writeAll("\"");
try debug_render_chunk(self.buf, l, metrics);
try l.appendSlice("\" ");
try l.writeAll("\" ");
if (self.eol)
try l.appendSlice("EOL ");
try l.writeAll("EOL ");
}
fn debug_render_chunk(chunk: []const u8, l: *ArrayList(u8), metrics: Metrics) !void {
fn debug_render_chunk(chunk: []const u8, l: *std.Io.Writer, metrics: Metrics) !void {
var cols: c_int = 0;
var buf = chunk;
while (buf.len > 0) {
switch (buf[0]) {
'\x00'...(' ' - 1) => {
const control = unicode.control_code_to_unicode(buf[0]);
try l.appendSlice(control);
try l.writeAll(control);
buf = buf[1..];
},
else => {
const bytes = metrics.egc_length(metrics, buf, &cols, 0);
var buf_: [4096]u8 = undefined;
try l.appendSlice(try std.fmt.bufPrint(&buf_, "{s}", .{std.fmt.fmtSliceEscapeLower(buf[0..bytes])}));
try l.writeAll(try std.fmt.bufPrint(&buf_, "{f}", .{std.ascii.hexEscape(buf[0..bytes], .lower)}));
buf = buf[bytes..];
},
}
@ -477,21 +477,21 @@ const Node = union(enum) {
}
}
fn debug_render_tree(self: *const Node, l: *ArrayList(u8), d: usize) void {
fn debug_render_tree(self: *const Node, l: *std.Io.Writer, d: usize) void {
switch (self.*) {
.node => |*node| {
l.append('(') catch {};
l.writeAll("(") catch {};
node.left.debug_render_tree(l, d + 1);
l.append(' ') catch {};
l.writeAll(" ") catch {};
node.right.debug_render_tree(l, d + 1);
l.append(')') catch {};
l.writeAll(")") catch {};
},
.leaf => |*leaf| {
l.append('"') catch {};
l.appendSlice(leaf.buf) catch {};
l.writeAll("\"") catch {};
l.writeAll(leaf.buf) catch {};
if (leaf.eol)
l.appendSlice("\\n") catch {};
l.append('"') catch {};
l.writeAll("\\n") catch {};
l.writeAll("\"") catch {};
},
}
}
@ -554,22 +554,23 @@ const Node = union(enum) {
return pred(egc);
}
pub fn get_line_width_map(self: *const Node, line: usize, map: *ArrayList(usize), metrics: Metrics) error{ Stop, NoSpaceLeft }!void {
pub fn get_line_width_map(self: *const Node, line: usize, map: *ArrayList(usize), allocator: Allocator, metrics: Metrics) error{ Stop, NoSpaceLeft }!void {
const Ctx = struct {
allocator: Allocator,
map: *ArrayList(usize),
wcwidth: usize = 0,
fn walker(ctx_: *anyopaque, egc: []const u8, wcwidth: usize, _: Metrics) Walker {
const ctx = @as(*@This(), @ptrCast(@alignCast(ctx_)));
var n = egc.len;
while (n > 0) : (n -= 1) {
const p = ctx.map.addOne() catch |e| return .{ .err = e };
const p = ctx.map.addOne(ctx.allocator) catch |e| return .{ .err = e };
p.* = ctx.wcwidth;
}
ctx.wcwidth += wcwidth;
return if (egc[0] == '\n') Walker.stop else Walker.keep_walking;
}
};
var ctx: Ctx = .{ .map = map };
var ctx: Ctx = .{ .allocator = allocator, .map = map };
self.walk_egc_forward(line, Ctx.walker, &ctx, metrics) catch |e| return switch (e) {
error.NoSpaceLeft => error.NoSpaceLeft,
else => error.Stop,
@ -794,6 +795,35 @@ const Node = union(enum) {
return if (found) ctx.result else error.NotFound;
}
pub fn byte_offset_to_line_and_col(self: *const Node, pos: usize, metrics: Metrics, eol_mode: EolMode) Cursor {
const ctx_ = struct {
pos: usize,
line: usize = 0,
col: usize = 0,
eol_mode: EolMode,
fn walker(ctx_: *anyopaque, egc: []const u8, wcwidth: usize, _: Metrics) Walker {
const ctx = @as(*@This(), @ptrCast(@alignCast(ctx_)));
if (egc[0] == '\n') {
ctx.pos -= switch (ctx.eol_mode) {
.lf => 1,
.crlf => @min(2, ctx.pos),
};
if (ctx.pos == 0) return Walker.stop;
ctx.line += 1;
ctx.col = 0;
} else {
ctx.pos -= @min(egc.len, ctx.pos);
if (ctx.pos == 0) return Walker.stop;
ctx.col += wcwidth;
}
return Walker.keep_walking;
}
};
var ctx: ctx_ = .{ .pos = pos + 1, .eol_mode = eol_mode };
self.walk_egc_forward(0, ctx_.walker, &ctx, metrics) catch {};
return .{ .row = ctx.line, .col = ctx.col };
}
pub fn insert_chars(
self_: *const Node,
line_: usize,
@ -897,7 +927,7 @@ const Node = union(enum) {
return .{ line, col, self };
}
pub fn store(self: *const Node, writer: anytype, eol_mode: EolMode) !void {
pub fn store(self: *const Node, writer: *std.Io.Writer, eol_mode: EolMode) !void {
switch (self.*) {
.node => |*node| {
try node.left.store(writer, eol_mode);
@ -914,7 +944,7 @@ const Node = union(enum) {
}
pub const FindAllCallback = fn (data: *anyopaque, begin_row: usize, begin_col: usize, end_row: usize, end_col: usize) error{Stop}!void;
pub fn find_all_ranges(self: *const Node, pattern: []const u8, data: *anyopaque, callback: *const FindAllCallback, allocator: Allocator) !void {
pub fn find_all_ranges(self: *const Node, pattern: []const u8, data: *anyopaque, callback: *const FindAllCallback, allocator: Allocator) error{ OutOfMemory, Stop }!void {
const Ctx = struct {
pattern: []const u8,
data: *anyopaque,
@ -923,9 +953,27 @@ const Node = union(enum) {
pos: usize = 0,
buf: []u8,
rest: []u8 = "",
writer: std.Io.Writer,
const Ctx = @This();
const Writer = std.io.Writer(*Ctx, error{Stop}, write);
fn write(ctx: *Ctx, bytes: []const u8) error{Stop}!usize {
fn drain(w: *std.Io.Writer, data_: []const []const u8, splat: usize) std.Io.Writer.Error!usize {
const ctx: *Ctx = @alignCast(@fieldParentPtr("writer", w));
std.debug.assert(splat == 0);
if (data_.len == 0) return 0;
var written: usize = 0;
for (data_[0 .. data_.len - 1]) |bytes| {
written += try ctx.write(bytes);
}
const pattern_ = data_[data_.len - 1];
switch (pattern_.len) {
0 => return written,
else => for (0..splat) |_| {
written += try ctx.write(pattern_);
},
}
return written;
}
fn write(ctx: *Ctx, bytes: []const u8) std.Io.Writer.Error!usize {
var input = bytes;
while (true) {
const input_consume_size = @min(ctx.buf.len - ctx.rest.len, input.len);
@ -946,7 +994,7 @@ const Node = union(enum) {
ctx.skip(&i, ctx.pattern.len);
const end_row = ctx.line + 1;
const end_pos = ctx.pos;
try ctx.callback(ctx.data, begin_row, begin_pos, end_row, end_pos);
ctx.callback(ctx.data, begin_row, begin_pos, end_row, end_pos) catch return error.WriteFailed;
} else {
ctx.skip(&i, 1);
}
@ -972,18 +1020,25 @@ const Node = union(enum) {
i.* += 1;
}
}
fn writer(ctx: *Ctx) Writer {
return .{ .context = ctx };
}
};
var ctx: Ctx = .{
.pattern = pattern,
.data = data,
.callback = callback,
.buf = try allocator.alloc(u8, pattern.len * 2),
.writer = .{
.vtable = &.{
.drain = Ctx.drain,
.flush = std.Io.Writer.noopFlush,
.rebase = std.Io.Writer.failingRebase,
},
.buffer = &.{},
},
};
defer allocator.free(ctx.buf);
return self.store(ctx.writer(), .lf);
return self.store(&ctx.writer, .lf) catch |e| switch (e) {
error.WriteFailed => error.Stop,
};
}
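The `find_all_ranges` rewrite above replaces the old generic `std.io.Writer(*Ctx, …)` with a hand-rolled `std.Io.Writer` embedded in the context struct, recovered from the vtable callback via `@fieldParentPtr`. A minimal standalone sketch of the same technique, assuming the 0.15 vtable shape (`drain`/`flush`/`rebase`) shown in the hunk; `CountingWriter` is illustrative:

```zig
const std = @import("std");

const CountingWriter = struct {
    writer: std.Io.Writer = .{
        .vtable = &.{
            .drain = drain,
            .flush = std.Io.Writer.noopFlush,
            .rebase = std.Io.Writer.failingRebase,
        },
        .buffer = &.{}, // unbuffered: every write goes straight to drain
    },
    count: usize = 0,

    fn drain(w: *std.Io.Writer, data: []const []const u8, splat: usize) std.Io.Writer.Error!usize {
        const self: *CountingWriter = @alignCast(@fieldParentPtr("writer", w));
        var written: usize = 0;
        // Per the drain contract, all elements but the last are written once;
        // the last element is repeated `splat` times.
        for (data[0 .. data.len - 1]) |bytes| {
            self.count += bytes.len;
            written += bytes.len;
        }
        const last = data[data.len - 1];
        for (0..splat) |_| {
            self.count += last.len;
            written += last.len;
        }
        return written;
    }
};
```

Callers then pass `&ctx.writer` anywhere a `*std.Io.Writer` is expected, exactly as `store(&ctx.writer, .lf)` does above.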
pub fn get_byte_pos(self: *const Node, pos_: Cursor, metrics_: Metrics, eol_mode: EolMode) !usize {
@ -1036,10 +1091,10 @@ const Node = union(enum) {
}
pub fn debug_render_chunks(self: *const Node, allocator: std.mem.Allocator, line: usize, metrics_: Metrics) ![]const u8 {
var output = std.ArrayList(u8).init(allocator);
var output: std.Io.Writer.Allocating = .init(allocator);
defer output.deinit();
const ctx_ = struct {
l: *ArrayList(u8),
l: *std.Io.Writer,
wcwidth: usize = 0,
fn walker(ctx_: *anyopaque, leaf: *const Leaf, metrics: Metrics) Walker {
const ctx = @as(*@This(), @ptrCast(@alignCast(ctx_)));
@ -1048,21 +1103,21 @@ const Node = union(enum) {
return if (!leaf.eol) Walker.keep_walking else Walker.stop;
}
};
var ctx: ctx_ = .{ .l = &output };
var ctx: ctx_ = .{ .l = &output.writer };
const found = self.walk_from_line_begin_const(line, ctx_.walker, &ctx, metrics_) catch true;
if (!found) return error.NotFound;
var buf: [16]u8 = undefined;
const wcwidth = try std.fmt.bufPrint(&buf, "{d}", .{ctx.wcwidth});
try output.appendSlice(wcwidth);
try output.writer.writeAll(wcwidth);
return output.toOwnedSlice();
}
pub fn debug_line_render_tree(self: *const Node, allocator: std.mem.Allocator, line: usize) ![]const u8 {
return if (self.find_line_node(line)) |n| blk: {
var l = std.ArrayList(u8).init(allocator);
var l: std.Io.Writer.Allocating = .init(allocator);
defer l.deinit();
n.debug_render_tree(&l, 0);
n.debug_render_tree(&l.writer, 0);
break :blk l.toOwnedSlice();
} else error.NotFound;
}
@ -1123,28 +1178,24 @@ fn new_file(self: *const Self, file_exists: *bool) error{OutOfMemory}!Root {
return Leaf.new(self.allocator, "", true, false);
}
pub fn LoadError(comptime reader_error: anytype) type {
return error{
pub const LoadError =
error{
OutOfMemory,
BufferUnderrun,
DanglingSurrogateHalf,
ExpectedSecondSurrogateHalf,
UnexpectedSecondSurrogateHalf,
Unexpected,
} || reader_error;
}
} || std.Io.Reader.Error;
pub fn load(self: *const Self, reader: anytype, size: usize, eol_mode: *EolMode, utf8_sanitized: *bool) LoadError(@TypeOf(reader).Error)!Root {
pub fn load(self: *const Self, reader: *std.Io.Reader, eol_mode: *EolMode, utf8_sanitized: *bool) LoadError!Root {
const lf = '\n';
const cr = '\r';
var buf = try self.external_allocator.alloc(u8, size);
const self_ = @constCast(self);
const read_size = try reader.readAll(buf);
if (read_size != size)
return error.BufferUnderrun;
const final_read = try reader.read(buf);
if (final_read != 0)
@panic("unexpected data in final read");
var read_buffer: ArrayList(u8) = .empty;
defer read_buffer.deinit(self.external_allocator);
try reader.appendRemainingUnlimited(self.external_allocator, &read_buffer);
var buf = try read_buffer.toOwnedSlice(self.external_allocator);
if (!std.unicode.utf8ValidateSlice(buf)) {
const converted = try unicode.utf8_sanitize(self.external_allocator, buf);
@ -1184,14 +1235,12 @@ pub fn load(self: *const Self, reader: anytype, size: usize, eol_mode: *EolMode,
return Node.merge_in_place(leaves, self.allocator);
}
pub const LoadFromStringError = LoadError(error{});
pub fn load_from_string(self: *const Self, s: []const u8, eol_mode: *EolMode, utf8_sanitized: *bool) LoadFromStringError!Root {
var stream = std.io.fixedBufferStream(s);
return self.load(stream.reader(), s.len, eol_mode, utf8_sanitized);
pub fn load_from_string(self: *const Self, s: []const u8, eol_mode: *EolMode, utf8_sanitized: *bool) LoadError!Root {
var reader = std.Io.Reader.fixed(s);
return self.load(&reader, eol_mode, utf8_sanitized);
}
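The reader side mirrors the writer migration: `std.io.fixedBufferStream(s).reader()` becomes `std.Io.Reader.fixed(s)`, and the size parameter disappears because the new `load` drains the reader itself. A small sketch of that drain pattern, grounded in the `appendRemainingUnlimited` call used in the `load` hunk above (the `readAllAlloc` name is illustrative):

```zig
const std = @import("std");

fn readAllAlloc(allocator: std.mem.Allocator, s: []const u8) ![]u8 {
    // A fixed reader serves bytes from an in-memory slice.
    var reader = std.Io.Reader.fixed(s);
    var list: std.ArrayList(u8) = .empty;
    defer list.deinit(allocator);
    try reader.appendRemainingUnlimited(allocator, &list);
    return list.toOwnedSlice(allocator);
}
```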
pub fn load_from_string_and_update(self: *Self, file_path: []const u8, s: []const u8) LoadFromStringError!void {
pub fn load_from_string_and_update(self: *Self, file_path: []const u8, s: []const u8) LoadError!void {
self.root = try self.load_from_string(s, &self.file_eol_mode, &self.file_utf8_sanitized);
self.set_file_path(file_path);
self.last_save = self.root;
@ -1200,7 +1249,7 @@ pub fn load_from_string_and_update(self: *Self, file_path: []const u8, s: []cons
self.mtime = std.time.milliTimestamp();
}
pub fn reset_from_string_and_update(self: *Self, s: []const u8) LoadFromStringError!void {
pub fn reset_from_string_and_update(self: *Self, s: []const u8) LoadError!void {
self.root = try self.load_from_string(s, &self.file_eol_mode, &self.file_utf8_sanitized);
self.last_save = self.root;
self.last_save_eol_mode = self.file_eol_mode;
@ -1249,7 +1298,7 @@ pub const LoadFromFileError = error{
ProcessNotFound,
Canceled,
PermissionDenied,
};
} || LoadError;
pub fn load_from_file(
self: *const Self,
@ -1265,8 +1314,9 @@ pub fn load_from_file(
file_exists.* = true;
defer file.close();
const stat = try file.stat();
return self.load(file.reader(), @intCast(stat.size), eol_mode, utf8_sanitized);
var read_buf: [4096]u8 = undefined;
var file_reader = file.reader(&read_buf);
return self.load(&file_reader.interface, eol_mode, utf8_sanitized);
}
pub fn load_from_file_and_update(self: *Self, file_path: []const u8) LoadFromFileError!void {
@ -1303,16 +1353,9 @@ pub fn store_to_string(self: *const Self, allocator: Allocator, eol_mode: EolMod
return s.toOwnedSlice();
}
fn store_to_file_const(self: *const Self, file: anytype) StoreToFileError!void {
const buffer_size = 4096 * 16; // 64KB
const BufferedWriter = std.io.BufferedWriter(buffer_size, std.fs.File.Writer);
const Writer = std.io.Writer(*BufferedWriter, BufferedWriter.Error, BufferedWriter.write);
const file_writer: std.fs.File.Writer = file.writer();
var buffered_writer: BufferedWriter = .{ .unbuffered_writer = file_writer };
try self.root.store(Writer{ .context = &buffered_writer }, self.file_eol_mode);
try buffered_writer.flush();
fn store_to_file_const(self: *const Self, writer: *std.Io.Writer) StoreToFileError!void {
try self.root.store(writer, self.file_eol_mode);
try writer.flush();
}
pub const StoreToFileError = error{
@ -1356,13 +1399,15 @@ pub const StoreToFileError = error{
WouldBlock,
PermissionDenied,
MessageTooBig,
WriteFailed,
};
pub fn store_to_existing_file_const(self: *const Self, file_path: []const u8) StoreToFileError!void {
const stat = try cwd().statFile(file_path);
var atomic = try cwd().atomicFile(file_path, .{ .mode = stat.mode });
var write_buffer: [4096]u8 = undefined;
var atomic = try cwd().atomicFile(file_path, .{ .mode = stat.mode, .write_buffer = &write_buffer });
defer atomic.deinit();
try self.store_to_file_const(atomic.file);
try self.store_to_file_const(&atomic.file_writer.interface);
try atomic.finish();
}
@ -1371,7 +1416,9 @@ pub fn store_to_new_file_const(self: *const Self, file_path: []const u8) StoreTo
try cwd().makePath(dir_name);
const file = try cwd().createFile(file_path, .{ .read = true, .truncate = true });
defer file.close();
try self.store_to_file_const(file);
var write_buffer: [4096]u8 = undefined;
var writer = file.writer(&write_buffer);
try self.store_to_file_const(&writer.interface);
}
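The two store hunks above replace the old 64KB `std.io.BufferedWriter` wrapper with 0.15's `std.fs.File.Writer`, which takes a caller-provided buffer and exposes the generic writer through its `interface` field. A minimal sketch of that pattern as used here (the `writeGreeting` name and path handling are illustrative):

```zig
const std = @import("std");

fn writeGreeting(path: []const u8) !void {
    const file = try std.fs.cwd().createFile(path, .{ .truncate = true });
    defer file.close();
    var buf: [4096]u8 = undefined;
    var file_writer = file.writer(&buf); // buffering now lives in File.Writer
    const w = &file_writer.interface;
    try w.writeAll("hello\n");
    try w.flush(); // flush the interface buffer before the file closes
}
```

The same `interface` field is what `atomic.file_writer.interface` passes to `store_to_file_const` in the atomic-file path above.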
pub fn store_to_file_and_clean(self: *Self, file_path: []const u8) StoreToFileError!void {


@ -37,7 +37,7 @@ pub fn open_file(self: *Self, file_path: []const u8) Buffer.LoadFromFileError!*B
return buffer;
}
pub fn open_scratch(self: *Self, file_path: []const u8, content: []const u8) Buffer.LoadFromStringError!*Buffer {
pub fn open_scratch(self: *Self, file_path: []const u8, content: []const u8) Buffer.LoadError!*Buffer {
const buffer = if (self.buffers.get(file_path)) |buffer| buffer else blk: {
var buffer = try Buffer.create(self.allocator);
errdefer buffer.deinit();


@ -14,13 +14,13 @@ pub const Context = struct {
pub fn fmt(value: anytype) Context {
context_buffer.clearRetainingCapacity();
cbor.writeValue(context_buffer.writer(), value) catch @panic("command.Context.fmt failed");
return .{ .args = .{ .buf = context_buffer.items } };
cbor.writeValue(&context_buffer.writer, value) catch @panic("command.Context.fmt failed");
return .{ .args = .{ .buf = context_buffer.written() } };
}
};
const context_buffer_allocator = std.heap.c_allocator;
threadlocal var context_buffer: std.ArrayList(u8) = std.ArrayList(u8).init(context_buffer_allocator);
threadlocal var context_buffer: std.Io.Writer.Allocating = .init(context_buffer_allocator);
pub const fmt = Context.fmt;
const Vtable = struct {
@ -96,12 +96,12 @@ pub fn Closure(comptime T: type) type {
}
const CommandTable = std.ArrayList(?*Vtable);
pub var commands: CommandTable = CommandTable.init(command_table_allocator);
pub var commands: CommandTable = .empty;
var command_names: std.StringHashMap(ID) = std.StringHashMap(ID).init(command_table_allocator);
const command_table_allocator = std.heap.c_allocator;
fn addCommand(cmd: *Vtable) !ID {
try commands.append(cmd);
try commands.append(command_table_allocator, cmd);
return commands.items.len - 1;
}
@ -121,9 +121,9 @@ pub fn execute(id: ID, ctx: Context) tp.result {
if (len < 1) {
tp.trace(tp.channel.debug, .{ "command", "execute", id, get_name(id) });
} else {
var msg_cb = std.ArrayList(u8).init(command_table_allocator);
var msg_cb: std.Io.Writer.Allocating = .init(command_table_allocator);
defer msg_cb.deinit();
const writer = msg_cb.writer();
const writer = &msg_cb.writer;
cbor.writeArrayHeader(writer, 4 + len) catch break :trace;
cbor.writeValue(writer, "command") catch break :trace;
cbor.writeValue(writer, "execute") catch break :trace;
@ -132,9 +132,9 @@ pub fn execute(id: ID, ctx: Context) tp.result {
while (len > 0) : (len -= 1) {
var arg: []const u8 = undefined;
if (cbor.matchValue(&iter, cbor.extract_cbor(&arg)) catch break :trace)
msg_cb.appendSlice(arg) catch break :trace;
writer.writeAll(arg) catch break :trace;
}
const msg: tp.message = .{ .buf = msg_cb.items };
const msg: tp.message = .{ .buf = msg_cb.written() };
tp.trace(tp.channel.debug, msg);
}
}


@ -122,7 +122,8 @@ pub fn diff(allocator: std.mem.Allocator, dst: []const u8, src: []const u8) ![]D
var dizzy_edits = std.ArrayListUnmanaged(dizzy.Edit){};
var scratch = std.ArrayListUnmanaged(u32){};
var diffs = std.ArrayList(Diff).init(allocator);
var diffs: std.ArrayList(Diff) = .empty;
errdefer diffs.deinit(allocator);
const scratch_len = 4 * (dst.len + src.len) + 2;
try scratch.ensureTotalCapacity(arena, scratch_len);
@ -131,7 +132,7 @@ pub fn diff(allocator: std.mem.Allocator, dst: []const u8, src: []const u8) ![]D
try dizzy.PrimitiveSliceDiffer(u8).diff(arena, &dizzy_edits, src, dst, scratch.items);
if (dizzy_edits.items.len > 2)
try diffs.ensureTotalCapacity((dizzy_edits.items.len - 1) / 2);
try diffs.ensureTotalCapacity(allocator, (dizzy_edits.items.len - 1) / 2);
var lines_dst: usize = 0;
var pos_src: usize = 0;
@@ -152,7 +153,7 @@ pub fn diff(allocator: std.mem.Allocator, dst: []const u8, src: []const u8) ![]D
pos_dst += dist;
const line_start_dst: usize = lines_dst;
scan_char(dst[dizzy_edit.range.start..dizzy_edit.range.end], &lines_dst, '\n', null);
(try diffs.addOne()).* = .{
(try diffs.addOne(allocator)).* = .{
.kind = .insert,
.line = line_start_dst,
.offset = last_offset,
@@ -165,7 +166,7 @@ pub fn diff(allocator: std.mem.Allocator, dst: []const u8, src: []const u8) ![]D
const dist = dizzy_edit.range.end - dizzy_edit.range.start;
pos_src += dist;
pos_dst += 0;
(try diffs.addOne()).* = .{
(try diffs.addOne(allocator)).* = .{
.kind = .delete,
.line = lines_dst,
.offset = last_offset,
@@ -176,7 +177,7 @@ pub fn diff(allocator: std.mem.Allocator, dst: []const u8, src: []const u8) ![]D
},
}
}
return diffs.toOwnedSlice();
return diffs.toOwnedSlice(allocator);
}
pub fn get_edits(allocator: std.mem.Allocator, dst: []const u8, src: []const u8) ![]Edit {
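The `diff()` changes above follow the Zig 0.15 `std.ArrayList` migration: the list type is now unmanaged by default, so it is initialized with `.empty` and every allocating call (`append`, `addOne`, `ensureTotalCapacity`, `deinit`, `toOwnedSlice`) takes the allocator explicitly. A minimal sketch of the pattern:

```zig
const std = @import("std");

test "unmanaged ArrayList: allocator passed at each call site" {
    const allocator = std.testing.allocator;
    var list: std.ArrayList(u32) = .empty; // no allocator stored in the list
    defer list.deinit(allocator);

    try list.append(allocator, 1);
    (try list.addOne(allocator)).* = 2; // same shape as the diffs hunk above

    try std.testing.expectEqual(@as(usize, 2), list.items.len);

    const owned = try list.toOwnedSlice(allocator); // resets the list
    defer allocator.free(owned);
    try std.testing.expectEqual(@as(u32, 2), owned[1]);
}
```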


@@ -13,6 +13,7 @@ pub const FileDest = struct {
column: ?usize = null,
end_column: ?usize = null,
exists: bool = false,
offset: ?usize = null,
};
pub const DirDest = struct {
@@ -37,11 +38,17 @@ pub fn parse(link: []const u8) error{InvalidFileLink}!Dest {
.{ .file = .{ .path = it.first() } };
switch (dest) {
.file => |*file| {
if (it.next()) |line_|
if (it.next()) |line_| if (line_.len > 0 and line_[0] == 'b') {
file.offset = std.fmt.parseInt(usize, line_[1..], 10) catch blk: {
file.path = link;
break :blk null;
};
} else {
file.line = std.fmt.parseInt(usize, line_, 10) catch blk: {
file.path = link;
break :blk null;
};
};
if (file.line) |_| if (it.next()) |col_| {
file.column = std.fmt.parseInt(usize, col_, 10) catch null;
};
@@ -88,6 +95,9 @@ pub fn parse_bracket_link(link: []const u8) error{InvalidFileLink}!Dest {
pub fn navigate(to: tp.pid_ref, link: *const Dest) anyerror!void {
switch (link.*) {
.file => |file| {
if (file.offset) |offset| {
return to.send(.{ "cmd", "navigate", .{ .file = file.path, .offset = offset } });
}
if (file.line) |l| {
if (file.column) |col| {
try to.send(.{ "cmd", "navigate", .{ .file = file.path, .line = l, .column = col } });
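The parse change above introduces a link-segment convention: a segment starting with `b` is read as a byte offset, anything else as a line number, falling back when parsing fails. A hedged standalone sketch of just that decision (the function name is illustrative, not from the source):

```zig
const std = @import("std");

const Segment = union(enum) { offset: usize, line: usize, invalid: void };

// A 'b'-prefixed segment ("b1024") is a byte offset; a bare number is a line.
fn parseSegment(seg: []const u8) Segment {
    if (seg.len > 0 and seg[0] == 'b') {
        const off = std.fmt.parseInt(usize, seg[1..], 10) catch return .invalid;
        return .{ .offset = off };
    }
    const line = std.fmt.parseInt(usize, seg, 10) catch return .invalid;
    return .{ .line = line };
}

test "b-prefixed segment parses as byte offset" {
    try std.testing.expectEqual(@as(usize, 1024), parseSegment("b1024").offset);
    try std.testing.expectEqual(@as(usize, 42), parseSegment("42").line);
    try std.testing.expect(parseSegment("bxyz") == .invalid);
}
```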


@@ -42,10 +42,10 @@ fn from_file_type(file_type: syntax.FileType) @This() {
pub fn get_default(allocator: std.mem.Allocator, file_type_name: []const u8) ![]const u8 {
const file_type = syntax.FileType.get_by_name_static(file_type_name) orelse return error.UnknownFileType;
const config = from_file_type(file_type);
var content = std.ArrayListUnmanaged(u8).empty;
defer content.deinit(allocator);
root.write_config_to_writer(@This(), config, content.writer(allocator)) catch {};
return content.toOwnedSlice(allocator);
var content: std.Io.Writer.Allocating = .init(allocator);
defer content.deinit();
root.write_config_to_writer(@This(), config, &content.writer) catch {};
return content.toOwnedSlice();
}
pub fn get_all_names() []const []const u8 {
@@ -93,21 +93,23 @@ pub fn get(file_type_name: []const u8) !?@This() {
}
pub fn get_config_file_path(allocator: std.mem.Allocator, file_type: []const u8) ![]u8 {
var stream = std.ArrayList(u8).fromOwnedSlice(allocator, try get_config_dir_path(allocator));
const writer = stream.writer();
var stream: std.Io.Writer.Allocating = .initOwnedSlice(allocator, try get_config_dir_path(allocator));
defer stream.deinit();
const writer = &stream.writer;
_ = try writer.writeAll(file_type);
_ = try writer.writeAll(".conf");
return stream.toOwnedSlice();
}
fn get_config_dir_path(allocator: std.mem.Allocator) ![]u8 {
var stream = std.ArrayList(u8).init(allocator);
const writer = stream.writer();
var stream: std.Io.Writer.Allocating = .init(allocator);
defer stream.deinit();
const writer = &stream.writer;
_ = try writer.writeAll(try root.get_config_dir());
_ = try writer.writeByte(std.fs.path.sep);
_ = try writer.writeAll("file_type");
_ = try writer.writeByte(std.fs.path.sep);
std.fs.makeDirAbsolute(stream.items) catch |e| switch (e) {
std.fs.makeDirAbsolute(stream.written()) catch |e| switch (e) {
error.PathAlreadyExists => {},
else => return e,
};
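`get_config_file_path` above uses `initOwnedSlice`, the 0.15 analogue of `ArrayList.fromOwnedSlice`: the allocating writer adopts an existing allocation and further writes append to it. A sketch of that behavior, assuming `initOwnedSlice` positions the write cursor at the end of the adopted slice (as the hunk relies on):

```zig
const std = @import("std");

test "Allocating.initOwnedSlice appends to an adopted buffer" {
    const allocator = std.testing.allocator;
    const prefix = try allocator.dupe(u8, "/tmp/conf/"); // illustrative path
    // The writer takes ownership of `prefix` and extends the allocation.
    var stream: std.Io.Writer.Allocating = .initOwnedSlice(allocator, prefix);
    defer stream.deinit();

    try stream.writer.writeAll("zig");
    try stream.writer.writeAll(".conf");

    const path = try stream.toOwnedSlice();
    defer allocator.free(path);
    try std.testing.expectEqualStrings("/tmp/conf/zig.conf", path);
}
```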


@@ -3,7 +3,7 @@ const tp = @import("thespian");
const shell = @import("shell");
const bin_path = @import("bin_path");
pub const Error = error{ OutOfMemory, GitNotFound, GitCallFailed };
pub const Error = error{ OutOfMemory, GitNotFound, GitCallFailed, WriteFailed };
const log_execute = false;
@@ -208,15 +208,16 @@ fn git_err(
) Error!void {
const cbor = @import("cbor");
const git_binary = get_git() orelse return error.GitNotFound;
var buf: std.ArrayListUnmanaged(u8) = .empty;
const writer = buf.writer(allocator);
var buf: std.Io.Writer.Allocating = .init(allocator);
defer buf.deinit();
const writer = &buf.writer;
switch (@typeInfo(@TypeOf(cmd))) {
.@"struct" => |info| if (info.is_tuple) {
try cbor.writeArrayHeader(writer, info.fields.len + 1);
try cbor.writeValue(writer, git_binary);
inline for (info.fields) |f|
try cbor.writeValue(writer, @field(cmd, f.name));
return shell.execute(allocator, .{ .buf = buf.items }, .{
return shell.execute(allocator, .{ .buf = buf.written() }, .{
.context = context,
.out = to_shell_output_handler(out),
.err = to_shell_output_handler(err),


@@ -77,12 +77,20 @@
["ctrl+enter", "smart_insert_line_after"],
["ctrl+end", "move_buffer_end"],
["ctrl+home", "move_buffer_begin"],
["ctrl+kp_end", "move_buffer_end"],
["ctrl+kp_home", "move_buffer_begin"],
["ctrl+up", "move_scroll_up"],
["ctrl+down", "move_scroll_down"],
["ctrl+kp_up", "move_scroll_up"],
["ctrl+kp_down", "move_scroll_down"],
["ctrl+page_up", "move_scroll_page_up"],
["ctrl+page_down", "move_scroll_page_down"],
["ctrl+kp_page_up", "move_scroll_page_up"],
["ctrl+kp_page_down", "move_scroll_page_down"],
["ctrl+left", "move_word_left"],
["ctrl+right", "move_word_right"],
["ctrl+kp_left", "move_word_left"],
["ctrl+kp_right", "move_word_right"],
["ctrl+backspace", "delete_word_left"],
["ctrl+delete", "delete_word_right"],
["ctrl+f5", "toggle_inspector_view"],
@@ -98,10 +106,16 @@
["ctrl+shift+enter", "smart_insert_line_before"],
["ctrl+shift+end", "select_buffer_end"],
["ctrl+shift+home", "select_buffer_begin"],
["ctrl+shift+kp_end", "select_buffer_end"],
["ctrl+shift+kp_home", "select_buffer_begin"],
["ctrl+shift+up", "select_scroll_up"],
["ctrl+shift+down", "select_scroll_down"],
["ctrl+shift+kp_up", "select_scroll_up"],
["ctrl+shift+kp_down", "select_scroll_down"],
["ctrl+shift+left", "select_word_left"],
["ctrl+shift+right", "select_word_right"],
["ctrl+shift+kp_left", "select_word_left"],
["ctrl+shift+kp_right", "select_word_right"],
["ctrl+shift+space", "selections_reverse"],
["alt+o", "open_previous_file"],
["alt+j", "join_next_line"],
@@ -117,8 +131,12 @@
["alt+R", ["shell_execute_insert", "openssl", "rand", "-hex", "4"]],
["alt+left", "jump_back"],
["alt+right", "jump_forward"],
["alt+kp_left", "jump_back"],
["alt+kp_right", "jump_forward"],
["alt+up", "pull_up"],
["alt+down", "pull_down"],
["alt+kp_up", "pull_up"],
["alt+kp_down", "pull_down"],
["alt+enter", "insert_line"],
["alt+f10", "gutter_mode_next"],
["alt+shift+f10", "gutter_style_next"],
@@ -128,29 +146,49 @@
["alt+shift+s", "filter", "sort", "-u"],
["alt+shift+v", "paste"],
["alt+shift+i", "add_cursors_to_line_ends"],
["alt+shift+left", "shrink_selection"],
["alt+shift+right", "expand_selection"],
["alt+shift+left", "expand_selection"],
["alt+shift+right", "shrink_selection"],
["alt+shift+kp_left", "expand_selection"],
["alt+shift+kp_right", "shrink_selection"],
["alt+home", "select_prev_sibling"],
["alt+end", "select_next_sibling"],
["alt+kp_home", "select_prev_sibling"],
["alt+kp_end", "select_next_sibling"],
["alt+{", "expand_selection"],
["alt+}", "shrink_selection", true],
["alt+[", "select_prev_sibling", true],
["alt+]", "select_next_sibling", true],
["alt+shift+e", "move_parent_node_end"],
["alt+shift+b", "move_parent_node_start"],
["alt+a", "select_all_siblings"],
["alt+shift+home", "move_scroll_left"],
["alt+shift+end", "move_scroll_right"],
["alt+shift+kp_home", "move_scroll_left"],
["alt+shift+kp_end", "move_scroll_right"],
["alt+shift+up", "add_cursor_up"],
["alt+shift+down", "add_cursor_down"],
["alt+shift+kp_up", "add_cursor_up"],
["alt+shift+kp_down", "add_cursor_down"],
["alt+shift+f12", "goto_type_definition"],
["shift+f3", "goto_prev_match"],
["shift+f10", "toggle_syntax_highlighting"],
["shift+f12", "references"],
["shift+left", "select_left"],
["shift+right", "select_right"],
["shift+kp_left", "select_left"],
["shift+kp_right", "select_right"],
["shift+up", "select_up"],
["shift+down", "select_down"],
["shift+kp_up", "select_up"],
["shift+kp_down", "select_down"],
["shift+home", "smart_select_begin"],
["shift+end", "select_end"],
["shift+kp_home", "smart_select_begin"],
["shift+kp_end", "select_end"],
["shift+page_up", "select_page_up"],
["shift+page_down", "select_page_down"],
["shift+kp_page_up", "select_page_up"],
["shift+kp_page_down", "select_page_down"],
["shift+enter", "smart_insert_line_before"],
["shift+backspace", "delete_backward"],
["shift+tab", "unindent"],
@@ -173,12 +211,20 @@
["backspace", "smart_delete_backward"],
["left", "move_left"],
["right", "move_right"],
["kp_left", "move_left"],
["kp_right", "move_right"],
["up", "move_up"],
["down", "move_down"],
["kp_up", "move_up"],
["kp_down", "move_down"],
["home", "smart_move_begin"],
["end", "move_end"],
["kp_home", "smart_move_begin"],
["kp_end", "move_end"],
["page_up", "move_page_up"],
["page_down", "move_page_down"],
["kp_page_up", "move_page_up"],
["kp_page_down", "move_page_down"],
["tab", "indent"],
["ctrl+space", "enter_mode", "select"],
@@ -231,16 +277,30 @@
["right", "select_right"],
["ctrl+left", "select_word_left"],
["ctrl+right", "select_word_right"],
["kp_left", "select_left"],
["kp_right", "select_right"],
["ctrl+kp_left", "select_word_left"],
["ctrl+kp_right", "select_word_right"],
["up", "select_up"],
["down", "select_down"],
["kp_up", "select_up"],
["kp_down", "select_down"],
["home", "select_begin"],
["end", "select_end"],
["kp_home", "select_begin"],
["kp_end", "select_end"],
["ctrl+home", "select_buffer_begin"],
["ctrl+end", "select_buffer_end"],
["ctrl+kp_home", "select_buffer_begin"],
["ctrl+kp_end", "select_buffer_end"],
["page_up", "select_page_up"],
["page_down", "select_page_down"],
["ctrl+page_up", "select_scroll_page_up"],
["ctrl+page_down", "select_scroll_page_down"],
["kp_page_up", "select_page_up"],
["kp_page_down", "select_page_down"],
["ctrl+kp_page_up", "select_scroll_page_up"],
["ctrl+kp_page_down", "select_scroll_page_down"],
["ctrl+b", "move_to_char", "select_to_char_left"],
["ctrl+t", "move_to_char", "select_to_char_right"],
["ctrl+space", "enter_mode", "normal"],
@@ -282,6 +342,8 @@
["q", "quit"],
["up", "home_menu_up"],
["down", "home_menu_down"],
["kp_up", "home_menu_up"],
["kp_down", "home_menu_down"],
["enter", "home_menu_activate"]
]
},
@@ -304,8 +366,12 @@
["ctrl+escape", "palette_menu_cancel"],
["ctrl+up", "palette_menu_up"],
["ctrl+down", "palette_menu_down"],
["ctrl+kp_up", "palette_menu_up"],
["ctrl+kp_down", "palette_menu_down"],
["ctrl+page_up", "palette_menu_pageup"],
["ctrl+page_down", "palette_menu_pagedown"],
["ctrl+kp_page_up", "palette_menu_pageup"],
["ctrl+kp_page_down", "palette_menu_pagedown"],
["ctrl+enter", "palette_menu_activate"],
["ctrl+backspace", "overlay_delete_word_left"],
["ctrl+shift+e", "palette_menu_up"],
@@ -326,10 +392,16 @@
["escape", "palette_menu_cancel"],
["up", "palette_menu_up"],
["down", "palette_menu_down"],
["kp_up", "palette_menu_up"],
["kp_down", "palette_menu_down"],
["page_up", "palette_menu_pageup"],
["page_down", "palette_menu_pagedown"],
["kp_page_up", "palette_menu_pageup"],
["kp_page_down", "palette_menu_pagedown"],
["home", "palette_menu_top"],
["end", "palette_menu_bottom"],
["kp_home", "palette_menu_top"],
["kp_end", "palette_menu_bottom"],
["enter", "palette_menu_activate"],
["delete", "palette_menu_delete_item"],
["backspace", "overlay_delete_backwards"]
@@ -341,6 +413,7 @@
},
"mini/numeric": {
"press": [
["b", "goto_offset"],
["ctrl+q", "quit"],
["ctrl+v", "system_paste"],
["ctrl+u", "mini_mode_reset"],
@@ -399,8 +472,12 @@
["shift+tab", "mini_mode_reverse_complete_file"],
["up", "mini_mode_reverse_complete_file"],
["down", "mini_mode_try_complete_file"],
["right", "mini_mode_try_complete_file_forward"],
["kp_up", "mini_mode_reverse_complete_file"],
["kp_down", "mini_mode_try_complete_file"],
["left", "mini_mode_delete_to_previous_path_segment"],
["right", "mini_mode_try_complete_file_forward"],
["kp_left", "mini_mode_delete_to_previous_path_segment"],
["kp_right", "mini_mode_try_complete_file_forward"],
["tab", "mini_mode_try_complete_file"],
["escape", "mini_mode_cancel"],
["enter", "mini_mode_select"],
@@ -430,6 +507,8 @@
["shift+f3", "goto_prev_match"],
["up", "select_prev_file"],
["down", "select_next_file"],
["kp_up", "select_prev_file"],
["kp_down", "select_next_file"],
["f3", "goto_next_match"],
["f15", "goto_prev_match"],
["f9", "theme_prev"],
@@ -462,6 +541,8 @@
["shift+f3", "goto_prev_match"],
["up", "mini_mode_history_prev"],
["down", "mini_mode_history_next"],
["kp_up", "mini_mode_history_prev"],
["kp_down", "mini_mode_history_next"],
["f3", "goto_next_match"],
["f15", "goto_prev_match"],
["f9", "theme_prev"],


@@ -111,6 +111,8 @@
["home", "move_begin"],
["end", "move_end"],
["kp_home", "move_begin"],
["kp_end", "move_end"],
["w","move_next_word_start"],
["b","move_prev_word_start"],
@@ -201,6 +203,8 @@
["page_up", "move_scroll_page_up"],
["page_down", "move_scroll_page_down"],
["kp_page_up", "move_scroll_page_up"],
["kp_page_down", "move_scroll_page_down"],
["space F", "find_file"],
["space S", "workspace_symbol_picker"],
@@ -293,6 +297,10 @@
["alt+down", "shrink_selection"],
["alt+left", "select_prev_sibling"],
["alt+right", "select_next_sibling"],
["alt+kp_up", "expand_selection"],
["alt+kp_down", "shrink_selection"],
["alt+kp_left", "select_prev_sibling"],
["alt+kp_right", "select_next_sibling"],
["alt+e", "extend_parent_node_end"],
["alt+b", "extend_parent_node_start"],
@@ -378,6 +386,10 @@
["down", "select_down"],
["up", "select_up"],
["right", "select_right"],
["kp_left", "select_left"],
["kp_down", "select_down"],
["kp_up", "select_up"],
["kp_right", "select_right"],
["t", "extend_till_char"],
["f", "move_to_char", "select_to_char_right_helix"],
@@ -386,6 +398,8 @@
["home", "extend_to_line_start"],
["end", "extend_to_line_end"],
["kp_home", "extend_to_line_start"],
["kp_end", "extend_to_line_end"],
["w", "extend_next_word_start"],
["b", "extend_pre_word_start"],


@@ -137,11 +137,11 @@ pub fn get_namespaces(allocator: std.mem.Allocator) ![]const []const u8 {
for (namespaces) |namespace| allocator.free(namespace);
allocator.free(namespaces);
}
var result = std.ArrayList([]const u8).init(allocator);
try result.append(try allocator.dupe(u8, "flow"));
try result.append(try allocator.dupe(u8, "emacs"));
try result.append(try allocator.dupe(u8, "vim"));
try result.append(try allocator.dupe(u8, "helix"));
var result: std.ArrayList([]const u8) = .empty;
try result.append(allocator, try allocator.dupe(u8, "flow"));
try result.append(allocator, try allocator.dupe(u8, "emacs"));
try result.append(allocator, try allocator.dupe(u8, "vim"));
try result.append(allocator, try allocator.dupe(u8, "helix"));
for (namespaces) |namespace| {
var exists = false;
for (result.items) |existing|
@@ -150,9 +150,9 @@ pub fn get_namespaces(allocator: std.mem.Allocator) ![]const []const u8 {
break;
};
if (!exists)
try result.append(try allocator.dupe(u8, namespace));
try result.append(allocator, try allocator.dupe(u8, namespace));
}
return result.toOwnedSlice();
return result.toOwnedSlice(allocator);
}
pub fn get_namespace() []const u8 {
@@ -198,7 +198,7 @@ fn get_mode_binding_set(mode_name: []const u8, insert_command: []const u8) LoadE
return binding_set;
}
pub const LoadError = (error{ NotFound, NotAnObject } || std.json.ParseError(std.json.Scanner) || parse_flow.ParseError || parse_vim.ParseError || std.json.ParseFromValueError);
pub const LoadError = (error{ NotFound, NotAnObject, WriteFailed } || std.json.ParseError(std.json.Scanner) || parse_flow.ParseError || parse_vim.ParseError || std.json.ParseFromValueError);
///A collection of modes that represent a switchable editor emulation
const Namespace = struct {
@@ -320,7 +320,7 @@ const Command = struct {
return args.len == 1 and args[0] == .integer;
}
fn load(allocator: std.mem.Allocator, tokens: []const std.json.Value) (parse_flow.ParseError || parse_vim.ParseError)!Command {
fn load(allocator: std.mem.Allocator, tokens: []const std.json.Value) (error{WriteFailed} || parse_flow.ParseError || parse_vim.ParseError)!Command {
if (tokens.len == 0) return error.InvalidFormat;
var state: enum { command, args } = .command;
var args = std.ArrayListUnmanaged(std.json.Value){};
@@ -343,11 +343,10 @@ const Command = struct {
switch (token) {
.string, .integer, .float, .bool => {},
else => {
var json = std.ArrayList(u8).init(allocator);
defer json.deinit();
std.json.stringify(token, .{}, json.writer()) catch {};
const json = try std.json.Stringify.valueAlloc(allocator, token, .{});
defer allocator.free(json);
const logger = log.logger("keybind");
logger.print_err("keybind.load", "ERROR: invalid command argument '{s}'", .{json.items});
logger.print_err("keybind.load", "ERROR: invalid command argument '{s}'", .{json});
logger.deinit();
return error.InvalidFormat;
},
@@ -357,14 +356,14 @@ const Command = struct {
}
}
var args_cbor = std.ArrayListUnmanaged(u8){};
defer args_cbor.deinit(allocator);
const writer = args_cbor.writer(allocator);
var args_cbor: std.Io.Writer.Allocating = .init(allocator);
defer args_cbor.deinit();
const writer = &args_cbor.writer;
try cbor.writeArrayHeader(writer, args.items.len);
for (args.items) |arg| try cbor.writeJsonValue(writer, arg);
return .{
.command = command_,
.args = try args_cbor.toOwnedSlice(allocator),
.args = try args_cbor.toOwnedSlice(),
};
}
};
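The `Command.load` hunk above also replaces the removed writer-based `std.json.stringify(value, .{}, writer)` call with the 0.15 one-shot helper `std.json.Stringify.valueAlloc`, which returns an allocated string. A minimal sketch:

```zig
const std = @import("std");

test "Stringify.valueAlloc replaces stringify-to-writer for one-shot JSON" {
    const allocator = std.testing.allocator;
    // Serializes any value directly to an owned []u8 in one call.
    const json = try std.json.Stringify.valueAlloc(allocator, .{ .key = "value" }, .{});
    defer allocator.free(json);
    try std.testing.expectEqualStrings("{\"key\":\"value\"}", json);
}
```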
@@ -426,7 +425,7 @@ const BindingSet = struct {
const KeySyntax = enum { flow, vim };
const OnMatchFailure = enum { insert, ignore };
fn load(allocator: std.mem.Allocator, namespace_name: []const u8, mode_bindings: std.json.Value, fallback: ?*const BindingSet, namespace: *Namespace) (error{OutOfMemory} || parse_flow.ParseError || parse_vim.ParseError || std.json.ParseFromValueError)!@This() {
fn load(allocator: std.mem.Allocator, namespace_name: []const u8, mode_bindings: std.json.Value, fallback: ?*const BindingSet, namespace: *Namespace) (error{ OutOfMemory, WriteFailed } || parse_flow.ParseError || parse_vim.ParseError || std.json.ParseFromValueError)!@This() {
var self: @This() = .{ .name = undefined, .selection_style = undefined };
const JsonConfig = struct {
@@ -475,7 +474,7 @@ const BindingSet = struct {
return self;
}
fn load_event(self: *BindingSet, allocator: std.mem.Allocator, dest: *std.ArrayListUnmanaged(Binding), event: input.Event, bindings: []const []const std.json.Value) (parse_flow.ParseError || parse_vim.ParseError)!void {
fn load_event(self: *BindingSet, allocator: std.mem.Allocator, dest: *std.ArrayListUnmanaged(Binding), event: input.Event, bindings: []const []const std.json.Value) (error{WriteFailed} || parse_flow.ParseError || parse_vim.ParseError)!void {
_ = event;
bindings: for (bindings) |entry| {
if (entry.len < 2) {
@@ -509,27 +508,26 @@ const BindingSet = struct {
errdefer allocator.free(key_events);
const cmd = entry[1];
var cmds = std.ArrayList(Command).init(allocator);
defer cmds.deinit();
var cmds: std.ArrayList(Command) = .empty;
defer cmds.deinit(allocator);
if (cmd == .string) {
try cmds.append(try Command.load(allocator, entry[1..]));
try cmds.append(allocator, try Command.load(allocator, entry[1..]));
} else {
for (entry[1..]) |cmd_entry| {
if (cmd_entry != .array) {
var json = std.ArrayList(u8).init(allocator);
defer json.deinit();
std.json.stringify(cmd_entry, .{}, json.writer()) catch {};
const json = try std.json.Stringify.valueAlloc(allocator, cmd_entry, .{});
defer allocator.free(json);
const logger = log.logger("keybind");
logger.print_err("keybind.load", "ERROR: invalid command definition {s}", .{json.items});
logger.print_err("keybind.load", "ERROR: invalid command definition {s}", .{json});
logger.deinit();
continue :bindings;
}
try cmds.append(try Command.load(allocator, cmd_entry.array.items));
try cmds.append(allocator, try Command.load(allocator, cmd_entry.array.items));
}
}
try dest.append(allocator, .{
.key_events = key_events,
.commands = try cmds.toOwnedSlice(),
.commands = try cmds.toOwnedSlice(allocator),
});
}
}
@@ -564,13 +562,13 @@ const BindingSet = struct {
for (self.press.items) |binding| {
const cmd = binding.commands[0].command;
var hint = if (hints_map.get(cmd)) |previous|
std.ArrayList(u8).fromOwnedSlice(allocator, previous)
var hint: std.Io.Writer.Allocating = if (hints_map.get(cmd)) |previous|
.initOwnedSlice(allocator, previous)
else
std.ArrayList(u8).init(allocator);
.init(allocator);
defer hint.deinit();
const writer = hint.writer();
if (hint.items.len > 0) try writer.writeAll(", ");
const writer = &hint.writer;
if (hint.written().len > 0) try writer.writeAll(", ");
const count = binding.key_events.len;
for (binding.key_events, 0..) |key_, n| {
var key = key_;
@@ -578,7 +576,7 @@ const BindingSet = struct {
switch (self.syntax) {
// .flow => {
else => {
try writer.print("{}", .{key});
try writer.print("{f}", .{key});
if (n < count - 1)
try writer.writeAll(" ");
},
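The `{}` to `{f}` change above reflects the 0.15 format-method split: a type printed with `{f}` must expose `format(self, writer: *std.Io.Writer)`. A standalone sketch with an illustrative `Key` type (not the editor's actual key type):

```zig
const std = @import("std");

const Key = struct {
    code: u21,
    // Zig 0.15: "{f}" dispatches to exactly this signature.
    pub fn format(self: Key, writer: *std.Io.Writer) std.Io.Writer.Error!void {
        try writer.print("key({u})", .{self.code});
    }
};

test "{f} dispatches to the format method" {
    var buf: [32]u8 = undefined;
    const s = try std.fmt.bufPrint(&buf, "{f}", .{Key{ .code = 'a' }});
    try std.testing.expectEqualStrings("key(a)", s);
}
```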


@@ -21,7 +21,7 @@ fn parse_error(comptime format: anytype, args: anytype) ParseError {
pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseError![]input.KeyEvent {
parse_error_reset();
if (str.len == 0) return parse_error("empty", .{});
var result_events = std.ArrayList(input.KeyEvent).init(allocator);
var result_events: std.ArrayList(input.KeyEvent) = .empty;
var iter_sequence = std.mem.tokenizeScalar(u8, str, ' ');
while (iter_sequence.next()) |item| {
var key: ?input.Key = null;
@@ -65,11 +65,11 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
if (key == null) return parse_error("unknown key '{s}' in '{s}'", .{ part, str });
}
if (key) |k|
try result_events.append(input.KeyEvent.from_key_modset(k, mods))
try result_events.append(allocator, input.KeyEvent.from_key_modset(k, mods))
else
return parse_error("no key defined in '{s}'", .{str});
}
return result_events.toOwnedSlice();
return result_events.toOwnedSlice(allocator);
}
pub const name_map = blk: {


@@ -81,8 +81,8 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
var state: State = .base;
var function_key_number: u8 = 0;
var modifiers: input.Mods = 0;
var result = std.ArrayList(input.KeyEvent).init(allocator);
defer result.deinit();
var result: std.ArrayList(input.KeyEvent) = .empty;
defer result.deinit(allocator);
var i: usize = 0;
while (i < str.len) {
@@ -100,7 +100,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
'0'...'9',
'!', '@', '#', '$', '%', '^', '&', '*', '(', ')',
'`', '~', '-', '_', '=', '+', '[', ']', '{', '}', '\\', '|', ':', ';', '\'', '"', ',', '.', '/', '?', => {
try result.append(from_key(str[i]));
try result.append(allocator, from_key(str[i]));
i += 1;
},
else => return parse_error(error.InvalidInitialCharacter, "str: {s}, i: {} c: {c}", .{ str, i, str[i] }),
@@ -216,7 +216,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.insert => {
if (std.mem.indexOf(u8, str[i..], "Insert") == 0) {
try result.append(from_key_mods(input.key.insert, modifiers));
try result.append(allocator, from_key_mods(input.key.insert, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 4;
@@ -224,7 +224,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.end => {
if (std.mem.indexOf(u8, str[i..], "End") == 0) {
try result.append(from_key_mods(input.key.end, modifiers));
try result.append(allocator, from_key_mods(input.key.end, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 3;
@@ -232,7 +232,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.home => {
if (std.mem.indexOf(u8, str[i..], "Home") == 0) {
try result.append(from_key_mods(input.key.home, modifiers));
try result.append(allocator, from_key_mods(input.key.home, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 4;
@@ -240,7 +240,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.bs => {
if (std.mem.indexOf(u8, str[i..], "BS") == 0) {
try result.append(from_key_mods(input.key.backspace, modifiers));
try result.append(allocator, from_key_mods(input.key.backspace, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 2;
@@ -248,7 +248,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.cr => {
if (std.mem.indexOf(u8, str[i..], "CR") == 0) {
try result.append(from_key_mods(input.key.enter, modifiers));
try result.append(allocator, from_key_mods(input.key.enter, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 2;
@@ -256,7 +256,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.space => {
if (std.mem.indexOf(u8, str[i..], "Space") == 0) {
try result.append(from_key_mods(input.key.space, modifiers));
try result.append(allocator, from_key_mods(input.key.space, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 5;
@@ -264,7 +264,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.del => {
if (std.mem.indexOf(u8, str[i..], "Del") == 0) {
try result.append(from_key_mods(input.key.delete, modifiers));
try result.append(allocator, from_key_mods(input.key.delete, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 3;
@@ -272,7 +272,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.tab => {
if (std.mem.indexOf(u8, str[i..], "Tab") == 0) {
try result.append(from_key_mods(input.key.tab, modifiers));
try result.append(allocator, from_key_mods(input.key.tab, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 3;
@@ -280,7 +280,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.up => {
if (std.mem.indexOf(u8, str[i..], "Up") == 0) {
try result.append(from_key_mods(input.key.up, modifiers));
try result.append(allocator, from_key_mods(input.key.up, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 2;
@@ -288,7 +288,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.esc => {
if (std.mem.indexOf(u8, str[i..], "Esc") == 0) {
try result.append(from_key_mods(input.key.escape, modifiers));
try result.append(allocator, from_key_mods(input.key.escape, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 3;
@@ -296,7 +296,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.down => {
if (std.mem.indexOf(u8, str[i..], "Down") == 0) {
try result.append(from_key_mods(input.key.down, modifiers));
try result.append(allocator, from_key_mods(input.key.down, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 4;
@@ -304,7 +304,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.left => {
if (std.mem.indexOf(u8, str[i..], "Left") == 0) {
try result.append(from_key_mods(input.key.left, modifiers));
try result.append(allocator, from_key_mods(input.key.left, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 4;
@@ -312,7 +312,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.right => {
if (std.mem.indexOf(u8, str[i..], "Right") == 0) {
try result.append(from_key_mods(input.key.right, modifiers));
try result.append(allocator, from_key_mods(input.key.right, modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 5;
@@ -320,7 +320,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.less_than => {
if (std.mem.indexOf(u8, str[i..], "LT") == 0) {
try result.append(from_key_mods('<', modifiers));
try result.append(allocator, from_key_mods('<', modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 2;
@@ -328,7 +328,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
.greater_than => {
if (std.mem.indexOf(u8, str[i..], "GT") == 0) {
try result.append(from_key_mods('>', modifiers));
try result.append(allocator, from_key_mods('>', modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 2;
@@ -345,7 +345,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
'>' => {
const function_key = input.key.f1 - 1 + function_key_number;
try result.append(from_key_mods(function_key, modifiers));
try result.append(allocator, from_key_mods(function_key, modifiers));
modifiers = 0;
function_key_number = 0;
state = .base;
@@ -371,7 +371,7 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
'0'...'9',
'`', '-', '=', '[', ']', '\\', ':', ';', '\'', ',', '.', '/',
=> {
try result.append(from_key_mods(str[i], modifiers));
try result.append(allocator, from_key_mods(str[i], modifiers));
modifiers = 0;
state = .escape_sequence_end;
i += 1;
@@ -405,5 +405,5 @@ pub fn parse_key_events(allocator: std.mem.Allocator, str: []const u8) ParseErro
},
}
}
return result.toOwnedSlice();
return result.toOwnedSlice(allocator);
}


@@ -50,8 +50,8 @@ const Process = struct {
self.* = .{
.arena = std.heap.ArenaAllocator.init(outer_a),
.allocator = self.arena.allocator(),
.backwards = std.ArrayList(Entry).init(self.allocator),
.forwards = std.ArrayList(Entry).init(self.allocator),
.backwards = .empty,
.forwards = .empty,
.receiver = Receiver.init(Process.receive, self),
};
return tp.spawn_link(self.allocator, self, Process.start, module_name);
@@ -65,8 +65,8 @@ const Process = struct {
fn deinit(self: *Process) void {
self.clear_backwards();
self.clear_forwards();
self.backwards.deinit();
self.forwards.deinit();
self.backwards.deinit(self.allocator);
self.forwards.deinit(self.allocator);
if (self.current) |entry| self.allocator.free(entry.file_path);
self.arena.deinit();
outer_a.destroy(self);
@@ -82,7 +82,7 @@ const Process = struct {
fn clear_table(self: *Process, table: *std.ArrayList(Entry)) void {
for (table.items) |entry| self.allocator.free(entry.file_path);
table.clearAndFree();
table.clearAndFree(self.allocator);
}
fn receive(self: *Process, from: tp.pid_ref, m: tp.message) tp.result {
@@ -122,17 +122,17 @@ const Process = struct {
return self.allocator.free(self.current.?.file_path);
if (isdupe(self.backwards.getLastOrNull(), entry)) {
if (self.current) |current| self.forwards.append(current) catch {};
if (self.current) |current| self.forwards.append(self.allocator, current) catch {};
if (self.backwards.pop()) |top|
self.allocator.free(top.file_path);
tp.trace(tp.channel.all, tp.message.fmt(.{ "location", "back", entry.file_path, entry.cursor.row, entry.cursor.col, self.backwards.items.len, self.forwards.items.len }));
} else if (isdupe(self.forwards.getLastOrNull(), entry)) {
if (self.current) |current| self.backwards.append(current) catch {};
if (self.current) |current| self.backwards.append(self.allocator, current) catch {};
if (self.forwards.pop()) |top|
self.allocator.free(top.file_path);
tp.trace(tp.channel.all, tp.message.fmt(.{ "location", "forward", entry.file_path, entry.cursor.row, entry.cursor.col, self.backwards.items.len, self.forwards.items.len }));
} else if (self.current) |current| {
try self.backwards.append(current);
try self.backwards.append(self.allocator, current);
tp.trace(tp.channel.all, tp.message.fmt(.{ "location", "new", current.file_path, current.cursor.row, current.cursor.col, self.backwards.items.len, self.forwards.items.len }));
self.clear_forwards();
}
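The changes above follow one mechanical rule of the Zig 0.15 migration: `std.ArrayList` no longer stores its allocator, so `.init(allocator)` becomes `.empty` and every allocating call (`append`, `deinit`, `clearAndFree`, …) takes the allocator explicitly. A minimal standalone sketch of the pattern, assuming the Zig 0.15 standard library (names here are illustrative, not from this repository):

```zig
const std = @import("std");

test "unmanaged ArrayList: every allocating call takes the allocator" {
    const allocator = std.testing.allocator;

    // Old: var list = std.ArrayList(u8).init(allocator);
    // New: the list holds no allocator; start from the .empty value.
    var list: std.ArrayList(u8) = .empty;
    defer list.deinit(allocator); // deinit needs the allocator too

    try list.append(allocator, 'a');
    try list.appendSlice(allocator, "bc");
    try std.testing.expectEqualStrings("abc", list.items);
}
```

This is why helpers like `expand_home` in this branch grow an explicit `allocator` parameter: callers must now thread the allocator to wherever the list is mutated.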

View file

@@ -94,16 +94,24 @@ fn receive(self: *Self, from: tp.pid_ref, m: tp.message) tp.result {
} else {
self.store(m);
}
if (!self.no_stderr)
std.io.getStdErr().writer().print("{s}\n", .{output}) catch {};
if (!self.no_stderr) {
var stderr_buffer: [1024]u8 = undefined;
var stderr_writer = std.fs.File.stderr().writer(&stderr_buffer);
stderr_writer.interface.print("{s}\n", .{output}) catch {};
stderr_writer.interface.flush() catch {};
}
} else if (try m.match(.{ "log", tp.string, tp.extract(&output) })) {
if (self.subscriber) |subscriber| {
subscriber.send_raw(m) catch {};
} else {
self.store(m);
}
if (!self.no_stdout)
std.io.getStdOut().writer().print("{s}\n", .{output}) catch {};
if (!self.no_stdout) {
var stdout_buffer: [1024]u8 = undefined;
var stdout_writer = std.fs.File.stdout().writer(&stdout_buffer);
stdout_writer.interface.print("{s}\n", .{output}) catch {};
stdout_writer.interface.flush() catch {};
}
} else if (try m.match(.{"subscribe"})) {
// log("subscribed");
if (self.subscriber) |*s| s.deinit();
@@ -152,8 +160,8 @@ pub const Logger = struct {
}
pub fn err(self: Logger, context: []const u8, e: anyerror) void {
var msg_fmt = std.ArrayList(u8).init(std.heap.c_allocator);
defer msg_fmt.deinit();
var msg_fmt: std.ArrayList(u8) = .empty;
defer msg_fmt.deinit(std.heap.c_allocator);
defer tp.reset_error();
var buf: [max_log_message]u8 = undefined;
var msg: []const u8 = "UNKNOWN";
@@ -168,12 +176,12 @@ pub const Logger = struct {
//
} else {
var failed = false;
msg_fmt.writer().print("{}", .{msg_}) catch {
msg_fmt.writer(std.heap.c_allocator).print("{f}", .{msg_}) catch {
failed = true;
};
if (failed) {
msg_fmt.clearRetainingCapacity();
msg_fmt.writer().print("{s}", .{std.fmt.fmtSliceEscapeLower(msg_.buf)}) catch {};
msg_fmt.writer(std.heap.c_allocator).print("{f}", .{std.ascii.hexEscape(msg_.buf, .lower)}) catch {};
}
msg__ = msg_fmt.items;
tp.trace(tp.channel.debug, .{ "log_err_fmt", msg__.len, msg__[0..@min(msg__.len, 128)] });
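The stdout/stderr changes in this file repeat another Zig 0.15 pattern: `std.io.getStdErr()` is gone, a `std.fs.File` writer now takes an explicit caller-owned buffer, and the generic writer is reached through `.interface`, which must be flushed. A hedged, self-contained sketch of that shape (function name is illustrative):

```zig
const std = @import("std");

// Zig 0.15: the File writer buffers into caller-provided storage and
// exposes a *std.Io.Writer as `.interface`.
fn print_error(msg: []const u8) void {
    var stderr_buffer: [1024]u8 = undefined;
    var stderr_writer = std.fs.File.stderr().writer(&stderr_buffer);
    stderr_writer.interface.print("error: {s}\n", .{msg}) catch {};
    // Nothing reaches the terminal until the interface is flushed.
    stderr_writer.interface.flush() catch {};
}
```

Forgetting the `flush()` is the classic migration bug: output silently stays in the stack buffer, which is why every converted site in this branch pairs `print` with `flush`.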

View file

@@ -251,27 +251,42 @@ pub fn main() anyerror!void {
const tui_proc = try tui.spawn(a, &ctx, &eh, &env);
defer tui_proc.deinit();
var links = std.ArrayList(file_link.Dest).init(a);
defer links.deinit();
var links: std.ArrayList(file_link.Dest) = .empty;
defer links.deinit(a);
var prev: ?*file_link.Dest = null;
var line_next: ?usize = null;
for (args.positional.trailing.items) |arg| {
var offset_next: ?usize = null;
for (args.positional.trailing) |arg| {
if (arg.len == 0) continue;
if (!args.literal and arg[0] == '+') {
const line = try std.fmt.parseInt(usize, arg[1..], 10);
if (prev) |p| switch (p.*) {
.file => |*file| {
file.line = line;
continue;
},
else => {},
};
line_next = line;
if (arg.len > 2 and arg[1] == 'b') {
const offset = try std.fmt.parseInt(usize, arg[2..], 10);
if (prev) |p| switch (p.*) {
.file => |*file| {
file.offset = offset;
continue;
},
else => {},
};
offset_next = offset;
line_next = null;
} else {
const line = try std.fmt.parseInt(usize, arg[1..], 10);
if (prev) |p| switch (p.*) {
.file => |*file| {
file.line = line;
continue;
},
else => {},
};
line_next = line;
offset_next = null;
}
continue;
}
const curr = try links.addOne();
const curr = try links.addOne(a);
curr.* = if (!args.literal) try file_link.parse(arg) else .{ .file = .{ .path = arg } };
prev = curr;
@@ -284,6 +299,15 @@ pub fn main() anyerror!void {
else => {},
}
}
if (offset_next) |offset| {
switch (curr.*) {
.file => |*file| {
file.offset = offset;
offset_next = null;
},
else => {},
}
}
}
var have_project = false;
@@ -330,9 +354,9 @@ pub fn main() anyerror!void {
while (count_args_.next()) |_| count += 1;
if (count == 0) break;
var msg = std.ArrayList(u8).init(a);
var msg: std.Io.Writer.Allocating = .init(a);
defer msg.deinit();
const writer = msg.writer();
const writer = &msg.writer;
var cmd_args = std.mem.splitScalar(u8, cmd, ':');
const cmd_ = cmd_args.next();
@@ -348,7 +372,7 @@ pub fn main() anyerror!void {
try cbor.writeValue(writer, arg);
}
try tui_proc.send_raw(.{ .buf = msg.items });
try tui_proc.send_raw(.{ .buf = msg.written() });
}
}
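The message-building change above swaps a growable `std.ArrayList(u8)` for `std.Io.Writer.Allocating`, whose `.writer` field is a plain `*std.Io.Writer` that any serializer (here, cbor) can target, and whose `written()` replaces the old `.items` accessor. A minimal sketch of that substitution, assuming the Zig 0.15 std API shown in the diff:

```zig
const std = @import("std");

test "Allocating writer replaces ArrayList(u8) as a message buffer" {
    // Old: var msg = std.ArrayList(u8).init(a); const writer = msg.writer();
    var msg: std.Io.Writer.Allocating = .init(std.testing.allocator);
    defer msg.deinit();

    const writer = &msg.writer; // a concrete *std.Io.Writer
    try writer.print("{s}:{d}", .{ "cmd", 42 });

    // .written() yields the bytes accumulated so far.
    try std.testing.expectEqualStrings("cmd:42", msg.written());
}
```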
@@ -367,7 +391,10 @@ pub fn print_exit_status(_: void, msg: []const u8) void {
} else if (std.mem.eql(u8, msg, "restart")) {
want_restart = true;
} else {
std.io.getStdErr().writer().print("\n" ++ application_name ++ " ERROR: {s}\n", .{msg}) catch {};
var stderr_buffer: [1024]u8 = undefined;
var stderr_writer = std.fs.File.stderr().writer(&stderr_buffer);
stderr_writer.interface.print("\n" ++ application_name ++ " ERROR: {s}\n", .{msg}) catch {};
stderr_writer.interface.flush() catch {};
final_exit_status = 1;
}
}
@@ -410,31 +437,31 @@ fn trace_to_file(m: thespian.message.c_buffer_type) callconv(.c) void {
}
}
};
const a = std.heap.c_allocator;
var state: *State = &(State.state orelse init: {
const a = std.heap.c_allocator;
var path = std.ArrayList(u8).init(a);
var path: std.Io.Writer.Allocating = .init(a);
defer path.deinit();
path.writer().print("{s}{c}trace.log", .{ get_state_dir() catch return, sep }) catch return;
const file = std.fs.createFileAbsolute(path.items, .{ .truncate = true }) catch return;
path.writer.print("{s}{c}trace.log", .{ get_state_dir() catch return, sep }) catch return;
const file = std.fs.createFileAbsolute(path.written(), .{ .truncate = true }) catch return;
State.state = .{
.file = file,
.last_time = std.time.microTimestamp(),
};
break :init State.state.?;
});
const file_writer = state.file.writer();
var buffer = std.io.bufferedWriter(file_writer);
const writer = buffer.writer();
var buffer: [4096]u8 = undefined;
var file_writer = state.file.writer(&buffer);
const writer = &file_writer.interface;
const ts = std.time.microTimestamp();
State.write_tdiff(writer, ts - state.last_time) catch {};
state.last_time = ts;
var stream = std.json.writeStream(writer, .{});
var stream: std.json.Stringify = .{ .writer = writer };
var iter: []const u8 = m.base[0..m.len];
cbor.JsonStream(@TypeOf(buffer)).jsonWriteValue(&stream, &iter) catch {};
cbor.JsonWriter.jsonWriteValue(&stream, &iter) catch {};
_ = writer.write("\n") catch {};
buffer.flush() catch {};
writer.flush() catch {};
}
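`trace_to_file` above shows two more conversions at once: `std.io.bufferedWriter(file.writer())` collapses into `file.writer(&buffer)` with the buffer owned by the caller, and `std.json.writeStream(...)` becomes a `std.json.Stringify` struct built directly over the writer. A hedged sketch combining both, assuming the Zig 0.15 APIs as used in this hunk (the function is illustrative):

```zig
const std = @import("std");

// Zig 0.15: the file writer carries its own buffer; json stringification
// is a plain struct over a *std.Io.Writer instead of a writeStream call.
fn write_json_line(file: std.fs.File, value: anytype) !void {
    var buffer: [4096]u8 = undefined;
    var file_writer = file.writer(&buffer);
    const writer = &file_writer.interface;

    var stream: std.json.Stringify = .{ .writer = writer };
    try stream.write(value);
    try writer.writeAll("\n");
    try writer.flush(); // flush the interface; there is no separate BufferedWriter
}
```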
pub fn exit(status: u8) noreturn {
@@ -509,9 +536,9 @@ fn read_text_config_file(T: type, allocator: std.mem.Allocator, conf: *T, bufs_:
}
pub fn parse_text_config_file(T: type, allocator: std.mem.Allocator, conf: *T, bufs_: *[][]const u8, file_name: []const u8, content: []const u8) !void {
var cbor_buf = std.ArrayList(u8).init(allocator);
var cbor_buf: std.Io.Writer.Allocating = .init(allocator);
defer cbor_buf.deinit();
const writer = cbor_buf.writer();
const writer = &cbor_buf.writer;
var it = std.mem.splitScalar(u8, content, '\n');
var lineno: u32 = 0;
while (it.next()) |line| {
@@ -530,7 +557,7 @@ pub fn parse_text_config_file(T: type, allocator: std.mem.Allocator, conf: *T, b
};
defer allocator.free(cb);
try cbor.writeValue(writer, name);
try cbor_buf.appendSlice(cb);
try writer.writeAll(cb);
}
const cb = try cbor_buf.toOwnedSlice();
var bufs = std.ArrayListUnmanaged([]const u8).fromOwnedSlice(bufs_.*);
@@ -626,7 +653,7 @@ fn read_nested_include_files(T: type, allocator: std.mem.Allocator, conf: *T, bu
};
}
pub const ConfigWriteError = error{ CreateConfigFileFailed, WriteConfigFileFailed };
pub const ConfigWriteError = error{ CreateConfigFileFailed, WriteConfigFileFailed, WriteFailed };
pub fn write_config(conf: anytype, allocator: std.mem.Allocator) (ConfigDirError || ConfigWriteError)!void {
config_mutex.lock();
@@ -643,14 +670,16 @@ fn write_text_config_file(comptime T: type, data: T, file_name: []const u8) Conf
return error.CreateConfigFileFailed;
};
defer file.close();
const writer = file.writer();
write_config_to_writer(T, data, writer) catch |e| {
var buf: [4096]u8 = undefined;
var writer = file.writer(&buf);
write_config_to_writer(T, data, &writer.interface) catch |e| {
std.log.err("write file failed with {any} for: {s}", .{ e, file_name });
return error.WriteConfigFileFailed;
};
try writer.interface.flush();
}
pub fn write_config_to_writer(comptime T: type, data: T, writer: anytype) @TypeOf(writer).Error!void {
pub fn write_config_to_writer(comptime T: type, data: T, writer: *std.Io.Writer) std.Io.Writer.Error!void {
const default: T = .{};
inline for (@typeInfo(T).@"struct".fields) |field_info| {
if (config_eql(
@@ -669,7 +698,7 @@ pub fn write_config_to_writer(comptime T: type, data: T, writer: anytype) @TypeO
else
try writer.writeAll("null"),
else => {
var s = std.json.writeStream(writer, .{ .whitespace = .minified });
var s: std.json.Stringify = .{ .writer = writer, .options = .{ .whitespace = .minified } };
try s.write(@field(data, field_info.name));
},
}
@@ -677,7 +706,7 @@ pub fn write_config_to_writer(comptime T: type, data: T, writer: anytype) @TypeO
}
}
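The signature change to `write_config_to_writer` is representative: `writer: anytype` with the inferred `@TypeOf(writer).Error` becomes the concrete `writer: *std.Io.Writer` with the fixed error set `std.Io.Writer.Error` (essentially `WriteFailed`), which is also why `WriteFailed` is added to error sets elsewhere in this branch. A small sketch under those assumptions (names are illustrative):

```zig
const std = @import("std");

// Zig 0.15 style: concrete writer type, concrete error set.
fn write_pair(writer: *std.Io.Writer, key: []const u8, value: u32) std.Io.Writer.Error!void {
    try writer.print("{s} = {d}\n", .{ key, value });
}

test "concrete *std.Io.Writer parameter" {
    var msg: std.Io.Writer.Allocating = .init(std.testing.allocator);
    defer msg.deinit();
    try write_pair(&msg.writer, "tab_width", 8);
    try std.testing.expectEqualStrings("tab_width = 8\n", msg.written());
}
```

The trade-off is losing per-writer error inference in exchange for non-generic (and faster-to-compile) serialization code.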
fn write_color_value(value: u24, writer: anytype) @TypeOf(writer).Error!void {
fn write_color_value(value: u24, writer: *std.Io.Writer) std.Io.Writer.Error!void {
var hex: [7]u8 = undefined;
try writer.writeByte('"');
try writer.writeAll(color.RGB.to_string(color.RGB.from_u24(value), &hex));
@@ -746,15 +775,15 @@ pub fn write_keybind_namespace(namespace_name: []const u8, content: []const u8)
pub fn list_keybind_namespaces(allocator: std.mem.Allocator) ![]const []const u8 {
var dir = try std.fs.openDirAbsolute(try get_keybind_namespaces_directory(), .{ .iterate = true });
defer dir.close();
var result = std.ArrayList([]const u8).init(allocator);
var result: std.ArrayList([]const u8) = .empty;
var iter = dir.iterateAssumeFirstIteration();
while (try iter.next()) |entry| {
switch (entry.kind) {
.file, .sym_link => try result.append(try allocator.dupe(u8, std.fs.path.stem(entry.name))),
.file, .sym_link => try result.append(allocator, try allocator.dupe(u8, std.fs.path.stem(entry.name))),
else => continue,
}
}
return result.toOwnedSlice();
return result.toOwnedSlice(allocator);
}
pub fn read_theme(allocator: std.mem.Allocator, theme_name: []const u8) ?[]const u8 {
@@ -1028,7 +1057,10 @@ fn restart() noreturn {
null,
};
const ret = std.c.execve(executable, @ptrCast(&argv), @ptrCast(std.os.environ));
std.io.getStdErr().writer().print("\nrestart failed: {d}", .{ret}) catch {};
var stderr_buffer: [1024]u8 = undefined;
var stderr_writer = std.fs.File.stderr().writer(&stderr_buffer);
stderr_writer.interface.print("\nrestart failed: {d}", .{ret}) catch {};
stderr_writer.interface.flush() catch {};
exit(234);
}

View file

@@ -463,12 +463,13 @@ const Process = struct {
}
fn request_recent_projects(self: *Process, from: tp.pid_ref, project_directory: []const u8) (ProjectError || Project.ClientError)!void {
var recent_projects = std.ArrayList(RecentProject).init(self.allocator);
defer recent_projects.deinit();
var recent_projects: std.ArrayList(RecentProject) = .empty;
defer recent_projects.deinit(self.allocator);
self.load_recent_projects(&recent_projects, project_directory) catch {};
self.sort_projects_by_last_used(&recent_projects);
var message = std.ArrayList(u8).init(self.allocator);
const writer = message.writer();
var message: std.Io.Writer.Allocating = .init(self.allocator);
defer message.deinit();
const writer = &message.writer;
try cbor.writeArrayHeader(writer, 3);
try cbor.writeValue(writer, "PRJ");
try cbor.writeValue(writer, "recent_projects");
@@ -478,7 +479,7 @@ const Process = struct {
try cbor.writeValue(writer, project.name);
try cbor.writeValue(writer, if (self.projects.get(project.name)) |_| true else false);
}
from.send_raw(.{ .buf = message.items }) catch return error.ClientFailed;
from.send_raw(.{ .buf = message.written() }) catch return error.ClientFailed;
self.logger.print("{d} projects found", .{recent_projects.items.len});
}
@@ -493,9 +494,9 @@ const Process = struct {
fn request_path_files(self: *Process, from: tp.pid_ref, project_directory: []const u8, max: usize, path: []const u8) (ProjectError || SpawnError || std.fs.Dir.OpenError)!void {
const project = self.projects.get(project_directory) orelse return error.NoProject;
var buf = std.ArrayList(u8).init(self.allocator);
defer buf.deinit();
try request_path_files_async(self.allocator, from, project, max, expand_home(&buf, path));
var buf: std.ArrayList(u8) = .empty;
defer buf.deinit(self.allocator);
try request_path_files_async(self.allocator, from, project, max, expand_home(self.allocator, &buf, path));
}
fn request_tasks(self: *Process, from: tp.pid_ref, project_directory: []const u8) (ProjectError || Project.ClientError)!void {
@@ -651,9 +652,10 @@ const Process = struct {
defer self.allocator.free(file_name);
var file = try std.fs.createFileAbsolute(file_name, .{ .truncate = true });
defer file.close();
var buffer = std.io.bufferedWriter(file.writer());
defer buffer.flush() catch {};
try project.write_state(buffer.writer());
var buffer: [4096]u8 = undefined;
var writer = file.writer(&buffer);
defer writer.interface.flush() catch {};
try project.write_state(&writer.interface);
}
fn restore_project(self: *Process, project: *Project) !void {
@@ -674,13 +676,14 @@ const Process = struct {
fn get_project_state_file_path(allocator: std.mem.Allocator, project: *Project) ![]const u8 {
const path = project.name;
var stream = std.ArrayList(u8).init(allocator);
const writer = stream.writer();
var stream: std.Io.Writer.Allocating = .init(allocator);
defer stream.deinit();
const writer = &stream.writer;
_ = try writer.write(try root.get_state_dir());
_ = try writer.writeByte(std.fs.path.sep);
_ = try writer.write("projects");
_ = try writer.writeByte(std.fs.path.sep);
std.fs.makeDirAbsolute(stream.items) catch |e| switch (e) {
std.fs.makeDirAbsolute(stream.written()) catch |e| switch (e) {
error.PathAlreadyExists => {},
else => return e,
};
@@ -696,19 +699,19 @@ const Process = struct {
}
fn load_recent_projects(self: *Process, recent_projects: *std.ArrayList(RecentProject), project_directory: []const u8) !void {
var path = std.ArrayList(u8).init(self.allocator);
var path: std.Io.Writer.Allocating = .init(self.allocator);
defer path.deinit();
const writer = path.writer();
const writer = &path.writer;
_ = try writer.write(try root.get_state_dir());
_ = try writer.writeByte(std.fs.path.sep);
_ = try writer.write("projects");
var dir = try std.fs.cwd().openDir(path.items, .{ .iterate = true });
var dir = try std.fs.cwd().openDir(path.written(), .{ .iterate = true });
defer dir.close();
var iter = dir.iterate();
while (try iter.next()) |entry| {
if (entry.kind != .file) continue;
try self.read_project_name(path.items, entry.name, recent_projects, project_directory);
try self.read_project_name(path.written(), entry.name, recent_projects, project_directory);
}
}
@@ -719,14 +722,14 @@ const Process = struct {
recent_projects: *std.ArrayList(RecentProject),
project_directory: []const u8,
) !void {
var path = std.ArrayList(u8).init(self.allocator);
var path: std.Io.Writer.Allocating = .init(self.allocator);
defer path.deinit();
const writer = path.writer();
const writer = &path.writer;
_ = try writer.write(state_dir);
_ = try writer.writeByte(std.fs.path.sep);
_ = try writer.write(file_path);
var file = try std.fs.openFileAbsolute(path.items, .{ .mode = .read_only });
var file = try std.fs.openFileAbsolute(path.written(), .{ .mode = .read_only });
defer file.close();
const stat = try file.stat();
const buffer = try self.allocator.alloc(u8, @intCast(stat.size));
@ -737,7 +740,7 @@ const Process = struct {
var name: []const u8 = undefined;
if (cbor.matchValue(&iter, tp.extract(&name)) catch return) {
const last_used = if (std.mem.eql(u8, project_directory, name)) std.math.maxInt(@TypeOf(stat.mtime)) else stat.mtime;
(try recent_projects.addOne()).* = .{ .name = try self.allocator.dupe(u8, name), .last_used = last_used };
(try recent_projects.addOne(self.allocator)).* = .{ .name = try self.allocator.dupe(u8, name), .last_used = last_used };
}
}
@@ -851,14 +854,14 @@ pub fn abbreviate_home(buf: []u8, path: []const u8) []const u8 {
}
}
pub fn expand_home(buf: *std.ArrayList(u8), file_path: []const u8) []const u8 {
pub fn expand_home(allocator: std.mem.Allocator, buf: *std.ArrayList(u8), file_path: []const u8) []const u8 {
if (builtin.os.tag == .windows) return file_path;
if (file_path.len > 0 and file_path[0] == '~') {
if (file_path.len > 1 and file_path[1] != std.fs.path.sep) return file_path;
const homedir = std.posix.getenv("HOME") orelse return file_path;
buf.appendSlice(homedir) catch return file_path;
buf.append(std.fs.path.sep) catch return file_path;
buf.appendSlice(file_path[2..]) catch return file_path;
buf.appendSlice(allocator, homedir) catch return file_path;
buf.append(allocator, std.fs.path.sep) catch return file_path;
buf.appendSlice(allocator, file_path[2..]) catch return file_path;
return buf.items;
} else return file_path;
}

View file

@@ -181,7 +181,7 @@ pub fn putstr(self: *Plane, text: []const u8) !usize {
var result: usize = 0;
const height = self.window.height;
const width = self.window.width;
var iter = self.window.screen.unicode.graphemeIterator(text);
var iter = self.window.unicode.graphemeIterator(text);
while (iter.next()) |grapheme| {
const s = grapheme.bytes(text);
if (std.mem.eql(u8, s, "\n")) {
@@ -443,7 +443,7 @@ pub fn egc_length(self: *const Plane, egcs: []const u8, colcount: *c_int, abs_co
colcount.* = @intCast(tab_width - (abs_col % tab_width));
return 1;
}
var iter = self.window.screen.unicode.graphemeIterator(egcs);
var iter = self.window.unicode.graphemeIterator(egcs);
const grapheme = iter.next() orelse {
colcount.* = 1;
return 1;
@@ -470,7 +470,7 @@ pub fn egc_chunk_width(self: *const Plane, chunk_: []const u8, abs_col_: usize,
}
pub fn egc_last(self: *const Plane, egcs: []const u8) []const u8 {
var iter = self.window.screen.unicode.graphemeIterator(egcs);
var iter = self.window.unicode.graphemeIterator(egcs);
var last: []const u8 = egcs[0..0];
while (iter.next()) |grapheme| last = grapheme.bytes(egcs);
return last;

View file

@@ -1,5 +1,6 @@
const vaxis = @import("vaxis");
const Io = @import("std").Io;
const meta = @import("std").meta;
const unicode = @import("std").unicode;
const FormatOptions = @import("std").fmt.FormatOptions;
@@ -88,12 +89,12 @@ pub const KeyEvent = struct {
return self.modifiers & ~mod.caps_lock;
}
pub fn format(self: @This(), comptime _: []const u8, _: FormatOptions, writer: anytype) !void {
pub fn format(self: @This(), writer: anytype) !void {
const mods = self.mods_no_shifts();
return if (self.event > 0)
writer.print("{}:{}{}", .{ event_fmt(self.event), mod_fmt(mods), key_fmt(self.key) })
writer.print("{f}:{f}{f}", .{ event_fmt(self.event), mod_fmt(mods), key_fmt(self.key) })
else
writer.print("{}{}", .{ mod_fmt(mods), key_fmt(self.key) });
writer.print("{f}{f}", .{ mod_fmt(mods), key_fmt(self.key) });
}
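This hunk shows the new custom-formatting convention: `format` drops the `comptime fmt`/`FormatOptions` parameters and takes only the writer, and call sites must opt in with the new `{f}` specifier instead of `{}`. A hedged standalone sketch of that convention, assuming Zig 0.15 `std.fmt` behavior (the `Point` type is illustrative):

```zig
const std = @import("std");

const Point = struct {
    x: i32,
    y: i32,

    // Zig 0.15: custom format methods take only a *std.Io.Writer; the
    // old (fmt, FormatOptions, writer) signature no longer applies.
    pub fn format(self: @This(), writer: *std.Io.Writer) std.Io.Writer.Error!void {
        try writer.print("({d},{d})", .{ self.x, self.y });
    }
};

test "select the format method with {f}" {
    var buf: [32]u8 = undefined;
    const s = try std.fmt.bufPrint(&buf, "{f}", .{Point{ .x = 1, .y = 2 }});
    try std.testing.expectEqualStrings("(1,2)", s);
}
```

That is why the diff rewrites `{}` to `{f}` wherever a value with a `format` method is printed.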
pub fn from_key(keypress: Key) @This() {
@@ -231,6 +232,25 @@ pub const utils = struct {
vaxis.Key.f33 => "f33",
vaxis.Key.f34 => "f34",
vaxis.Key.f35 => "f35",
vaxis.Key.kp_decimal => "kp_decimal",
vaxis.Key.kp_divide => "kp_divide",
vaxis.Key.kp_multiply => "kp_multiply",
vaxis.Key.kp_subtract => "kp_subtract",
vaxis.Key.kp_add => "kp_add",
vaxis.Key.kp_enter => "kp_enter",
vaxis.Key.kp_equal => "kp_equal",
vaxis.Key.kp_separator => "kp_separator",
vaxis.Key.kp_left => "kp_left",
vaxis.Key.kp_right => "kp_right",
vaxis.Key.kp_up => "kp_up",
vaxis.Key.kp_down => "kp_down",
vaxis.Key.kp_page_up => "kp_page_up",
vaxis.Key.kp_page_down => "kp_page_down",
vaxis.Key.kp_home => "kp_home",
vaxis.Key.kp_end => "kp_end",
vaxis.Key.kp_insert => "kp_insert",
vaxis.Key.kp_delete => "kp_delete",
vaxis.Key.kp_begin => "kp_begin",
vaxis.Key.media_play => "media_play",
vaxis.Key.media_pause => "media_pause",
vaxis.Key.media_play_pause => "media_play_pause",
@@ -316,8 +336,8 @@ pub const utils = struct {
pub fn key_event_short_fmt(ke: KeyEvent) struct {
ke: KeyEvent,
pub fn format(self: @This(), comptime _: []const u8, _: FormatOptions, writer: anytype) !void {
return writer.print("{}{}", .{ mod_short_fmt(self.ke.modifiers), key_short_fmt(self.ke.key) });
pub fn format(self: @This(), writer: anytype) Io.Writer.Error!void {
return writer.print("{f}{f}", .{ mod_short_fmt(self.ke.modifiers), key_short_fmt(self.ke.key) });
}
} {
return .{ .ke = ke };
@@ -325,7 +345,7 @@ pub fn key_event_short_fmt(ke: KeyEvent) struct {
pub fn event_fmt(evt: Event) struct {
event: Event,
pub fn format(self: @This(), comptime _: []const u8, _: FormatOptions, writer: anytype) !void {
pub fn format(self: @This(), writer: anytype) Io.Writer.Error!void {
return switch (self.event) {
event.press => writer.writeAll("press"),
event.repeat => writer.writeAll("repeat"),
@@ -339,7 +359,7 @@ pub fn event_fmt(evt: Event) struct {
pub fn event_short_fmt(evt: Event) struct {
event: Event,
pub fn format(self: @This(), comptime _: []const u8, _: FormatOptions, writer: anytype) !void {
pub fn format(self: @This(), writer: anytype) Io.Writer.Error!void {
return switch (self.event) {
event.press => writer.writeAll("P"),
event.repeat => writer.writeAll("RP"),
@@ -353,11 +373,11 @@ pub fn event_short_fmt(evt: Event) struct {
pub fn key_fmt(key_: Key) struct {
key: Key,
pub fn format(self: @This(), comptime _: []const u8, _: FormatOptions, writer: anytype) !void {
pub fn format(self: @This(), writer: anytype) Io.Writer.Error!void {
var key_string = utils.key_id_string(self.key);
var buf: [6]u8 = undefined;
if (key_string.len == 0) {
const bytes = try ucs32_to_utf8(&[_]u32{self.key}, &buf);
const bytes = ucs32_to_utf8(&[_]u32{self.key}, &buf) catch return error.WriteFailed;
key_string = buf[0..bytes];
}
try writer.writeAll(key_string);
@@ -368,7 +388,7 @@ pub fn key_fmt(key_: Key) struct {
pub fn key_short_fmt(key_: Key) struct {
key: Key,
pub fn format(self: @This(), comptime _: []const u8, _: FormatOptions, writer: anytype) !void {
pub fn format(self: @This(), writer: anytype) Io.Writer.Error!void {
var key_string = utils.key_id_string_short(self.key);
var buf: [6]u8 = undefined;
if (key_string.len == 0) {
@@ -383,7 +403,7 @@ pub fn key_short_fmt(key_: Key) struct {
pub fn mod_fmt(mods: Mods) struct {
modifiers: Mods,
pub fn format(self: @This(), comptime _: []const u8, _: FormatOptions, writer: anytype) !void {
pub fn format(self: @This(), writer: anytype) Io.Writer.Error!void {
const modset: ModSet = @bitCast(self.modifiers);
if (modset.super) try writer.writeAll("super+");
if (modset.ctrl) try writer.writeAll("ctrl+");
@@ -396,7 +416,7 @@ pub fn mod_fmt(mods: Mods) struct {
pub fn mod_short_fmt(mods: Mods) struct {
modifiers: Mods,
pub fn format(self: @This(), comptime _: []const u8, _: FormatOptions, writer: anytype) !void {
pub fn format(self: @This(), writer: anytype) Io.Writer.Error!void {
const modset: ModSet = @bitCast(self.modifiers);
if (modset.super) try writer.writeAll("Super-");
if (modset.ctrl) try writer.writeAll("C-");

View file

@@ -22,15 +22,16 @@ allocator: std.mem.Allocator,
tty: vaxis.Tty,
vx: vaxis.Vaxis,
tty_buffer: []u8,
no_alternate: bool,
event_buffer: std.ArrayList(u8),
input_buffer: std.ArrayList(u8),
event_buffer: std.Io.Writer.Allocating,
input_buffer: std.Io.Writer.Allocating,
mods: vaxis.Key.Modifiers = .{},
queries_done: bool,
bracketed_paste: bool = false,
bracketed_paste_buffer: std.ArrayList(u8),
bracketed_paste_buffer: std.Io.Writer.Allocating,
handler_ctx: *anyopaque,
dispatch_input: ?*const fn (ctx: *anyopaque, cbor_msg: []const u8) void = null,
@@ -61,6 +62,7 @@ pub const Error = error{
BadArrayAllocExtract,
InvalidMapType,
InvalidUnion,
WriteFailed,
} || std.Thread.SpawnError;
pub fn init(allocator: std.mem.Allocator, handler_ctx: *anyopaque, no_alternate: bool, _: *const fn (ctx: *anyopaque) void) Error!Self {
@@ -74,13 +76,15 @@ pub fn init(allocator: std.mem.Allocator, handler_ctx: *anyopaque, no_alternate:
},
.system_clipboard_allocator = allocator,
};
const tty_buffer = try allocator.alloc(u8, 4096);
return .{
.allocator = allocator,
.tty = vaxis.Tty.init() catch return error.TtyInitError,
.tty = vaxis.Tty.init(tty_buffer) catch return error.TtyInitError,
.tty_buffer = tty_buffer,
.vx = try vaxis.init(allocator, opts),
.no_alternate = no_alternate,
.event_buffer = std.ArrayList(u8).init(allocator),
.input_buffer = std.ArrayList(u8).init(allocator),
.event_buffer = .init(allocator),
.input_buffer = .init(allocator),
.bracketed_paste_buffer = std.ArrayList(u8).init(allocator),
.handler_ctx = handler_ctx,
.logger = log.logger(log_name),
@@ -94,6 +98,7 @@ pub fn deinit(self: *Self) void {
self.loop.stop();
self.vx.deinit(self.allocator, self.tty.anyWriter());
self.tty.deinit();
self.allocator.free(self.tty_buffer);
self.bracketed_paste_buffer.deinit();
self.input_buffer.deinit();
self.event_buffer.deinit();
@@ -204,9 +209,8 @@ pub fn run(self: *Self) Error!void {
pub fn render(self: *Self) !void {
if (in_panic.load(.acquire)) return;
var bufferedWriter = self.tty.bufferedWriter();
try self.vx.render(bufferedWriter.writer().any());
try bufferedWriter.flush();
try self.vx.render(&self.tty.writer.interface);
try self.tty.writer.interface.flush();
}
pub fn sigwinch(self: *Self) !void {
@@ -214,7 +218,7 @@ pub fn sigwinch(self: *Self) !void {
try self.resize(try vaxis.Tty.getWinsize(self.input_fd_blocking()));
}
fn resize(self: *Self, ws: vaxis.Winsize) error{ TtyWriteError, OutOfMemory }!void {
fn resize(self: *Self, ws: vaxis.Winsize) error{ TtyWriteError, OutOfMemory, WriteFailed }!void {
self.vx.resize(self.allocator, self.tty.anyWriter(), ws) catch return error.TtyWriteError;
self.vx.queueRefresh();
if (self.dispatch_event) |f| f(self.handler_ctx, try self.fmtmsg(.{"resize"}));
@@ -394,25 +398,26 @@ pub fn process_renderer_event(self: *Self, msg: []const u8) Error!void {
}
}
fn fmtmsg(self: *Self, value: anytype) std.ArrayList(u8).Writer.Error![]const u8 {
fn fmtmsg(self: *Self, value: anytype) std.Io.Writer.Error![]const u8 {
self.event_buffer.clearRetainingCapacity();
try cbor.writeValue(self.event_buffer.writer(), value);
return self.event_buffer.items;
try cbor.writeValue(&self.event_buffer.writer, value);
return self.event_buffer.written();
}
fn handle_bracketed_paste_input(self: *Self, cbor_msg: []const u8) !bool {
var keypress: input.Key = undefined;
var egc_: input.Key = undefined;
var mods: usize = undefined;
const writer = &self.bracketed_paste_buffer.writer;
if (try cbor.match(cbor_msg, .{ "I", cbor.number, cbor.extract(&keypress), cbor.extract(&egc_), cbor.string, cbor.extract(&mods) })) {
switch (keypress) {
106 => if (mods == 4) try self.bracketed_paste_buffer.appendSlice("\n") else try self.bracketed_paste_buffer.appendSlice("j"),
input.key.enter => try self.bracketed_paste_buffer.appendSlice("\n"),
input.key.tab => try self.bracketed_paste_buffer.appendSlice("\t"),
106 => if (mods == 4) try writer.writeAll("\n") else try writer.writeAll("j"),
input.key.enter => try writer.writeAll("\n"),
input.key.tab => try writer.writeAll("\t"),
else => if (!input.is_non_input_key(keypress)) {
var buf: [6]u8 = undefined;
const bytes = try input.ucs32_to_utf8(&[_]u32{egc_}, &buf);
try self.bracketed_paste_buffer.appendSlice(buf[0..bytes]);
try writer.writeAll(buf[0..bytes]);
} else {
var buf: [6]u8 = undefined;
const bytes = try input.ucs32_to_utf8(&[_]u32{egc_}, &buf);
@@ -431,16 +436,16 @@ fn handle_bracketed_paste_start(self: *Self) !void {
fn handle_bracketed_paste_end(self: *Self) !void {
defer {
self.bracketed_paste_buffer.clearAndFree();
self.bracketed_paste_buffer.clearRetainingCapacity();
self.bracketed_paste = false;
}
if (!self.bracketed_paste) return;
if (self.dispatch_event) |f| f(self.handler_ctx, try self.fmtmsg(.{ "system_clipboard", self.bracketed_paste_buffer.items }));
if (self.dispatch_event) |f| f(self.handler_ctx, try self.fmtmsg(.{ "system_clipboard", self.bracketed_paste_buffer.written() }));
}
fn handle_bracketed_paste_error(self: *Self, e: Error) !void {
self.logger.err("bracketed paste", e);
self.bracketed_paste_buffer.clearAndFree();
self.bracketed_paste_buffer.clearRetainingCapacity();
self.bracketed_paste = false;
return e;
}

View file

@@ -102,37 +102,41 @@ pub const CurSel = struct {
}
pub fn enable_selection(self: *Self, root: Buffer.Root, metrics: Buffer.Metrics) !*Selection {
return switch (tui.get_selection_style()) {
.normal => self.enable_selection_normal(),
.inclusive => try self.enable_selection_inclusive(root, metrics),
};
self.selection = try self.to_selection(root, metrics);
return if (self.selection) |*sel| sel else unreachable;
}
pub fn enable_selection_normal(self: *Self) *Selection {
return if (self.selection) |*sel|
sel
else cod: {
self.selection = Selection.from_cursor(&self.cursor);
break :cod &self.selection.?;
self.selection = self.to_selection_normal();
return if (self.selection) |*sel| sel else unreachable;
}
pub fn to_selection(self: *const Self, root: Buffer.Root, metrics: Buffer.Metrics) !Selection {
return switch (tui.get_selection_style()) {
.normal => self.to_selection_normal(),
.inclusive => try self.to_selection_inclusive(root, metrics),
};
}
fn enable_selection_inclusive(self: *Self, root: Buffer.Root, metrics: Buffer.Metrics) !*Selection {
return if (self.selection) |*sel|
fn to_selection_normal(self: *const Self) Selection {
return if (self.selection) |sel| sel else Selection.from_cursor(&self.cursor);
}
fn to_selection_inclusive(self: *const Self, root: Buffer.Root, metrics: Buffer.Metrics) !Selection {
return if (self.selection) |sel|
sel
else cod: {
self.selection = Selection.from_cursor(&self.cursor);
try self.selection.?.end.move_right(root, metrics);
try self.cursor.move_right(root, metrics);
break :cod &self.selection.?;
var sel = Selection.from_cursor(&self.cursor);
try sel.end.move_right(root, metrics);
break :cod sel;
};
}
fn to_inclusive_cursor(self: *const Self, root: Buffer.Root, metrics: Buffer.Metrics) !Cursor {
var res = self.cursor;
fn to_cursor_inclusive(self: *const Self, root: Buffer.Root, metrics: Buffer.Metrics) !Cursor {
var cursor = self.cursor;
if (self.selection) |sel| if (!sel.is_reversed())
try res.move_left(root, metrics);
return res;
try cursor.move_left(root, metrics);
return cursor;
}
pub fn disable_selection(self: *Self, root: Buffer.Root, metrics: Buffer.Metrics) void {
@@ -170,9 +174,8 @@ pub const CurSel = struct {
return sel;
}
fn select_node(self: *Self, node: syntax.Node, root: Buffer.Root, metrics: Buffer.Metrics) error{NotFound}!void {
const range = node.getRange();
self.selection = .{
fn selection_from_range(range: syntax.Range, root: Buffer.Root, metrics: Buffer.Metrics) error{NotFound}!Selection {
return .{
.begin = .{
.row = range.start_point.row,
.col = try root.pos_to_width(range.start_point.row, range.start_point.column, metrics),
@@ -182,7 +185,23 @@ pub const CurSel = struct {
.col = try root.pos_to_width(range.end_point.row, range.end_point.column, metrics),
},
};
self.cursor = self.selection.?.end;
}
fn selection_from_node(node: syntax.Node, root: Buffer.Root, metrics: Buffer.Metrics) error{NotFound}!Selection {
return selection_from_range(node.getRange(), root, metrics);
}
fn select_node(self: *Self, node: syntax.Node, root: Buffer.Root, metrics: Buffer.Metrics) error{NotFound}!void {
const sel = try selection_from_node(node, root, metrics);
self.selection = sel;
self.cursor = sel.end;
}
fn select_parent_node(self: *Self, node: syntax.Node, root: Buffer.Root, metrics: Buffer.Metrics) error{NotFound}!syntax.Node {
const parent = node.getParent();
if (parent.isNull()) return error.NotFound;
try self.select_node(parent, root, metrics);
return parent;
}
fn write(self: *const Self, writer: Buffer.MetaWriter) !void {
@@ -1192,7 +1211,7 @@ pub const Editor = struct {
fn get_rendered_cursor(self: *Self, style: anytype, cursel: anytype) !Cursor {
return switch (style) {
.normal => cursel.cursor,
.inclusive => try cursel.to_inclusive_cursor(try self.buf_root(), self.metrics),
.inclusive => try cursel.to_cursor_inclusive(try self.buf_root(), self.metrics),
};
}
@@ -4279,7 +4298,7 @@ pub const Editor = struct {
}
pub const selections_reverse_meta: Meta = .{ .description = "Reverse selection" };
fn node_at_selection(self: *Self, sel: Selection, root: Buffer.Root, metrics: Buffer.Metrics) error{Stop}!syntax.Node {
fn node_at_selection(self: *const Self, sel: Selection, root: Buffer.Root, metrics: Buffer.Metrics) error{Stop}!syntax.Node {
const syn = self.syntax orelse return error.Stop;
const node = try syn.node_at_point_range(.{
.start_point = .{
@@ -4297,19 +4316,35 @@ pub const Editor = struct {
return node;
}
fn select_node_at_cursor(self: *Self, root: Buffer.Root, cursel: *CurSel, metrics: Buffer.Metrics) !void {
cursel.disable_selection(root, self.metrics);
const sel = (try cursel.enable_selection(root, self.metrics)).*;
return cursel.select_node(try self.node_at_selection(sel, root, metrics), root, metrics);
fn top_node_at_selection(self: *const Self, sel: Selection, root: Buffer.Root, metrics: Buffer.Metrics) error{Stop}!syntax.Node {
var node = try self.node_at_selection(sel, root, metrics);
if (node.isNull()) return node;
var parent = node.getParent();
if (parent.isNull()) return node;
const node_sel = CurSel.selection_from_node(node, root, metrics) catch return node;
var parent_sel = CurSel.selection_from_node(parent, root, metrics) catch return node;
while (parent_sel.eql(node_sel)) {
node = parent;
parent = parent.getParent();
parent_sel = CurSel.selection_from_node(parent, root, metrics) catch return node;
}
return node;
}
fn top_node_at_cursel(self: *const Self, cursel: *const CurSel, root: Buffer.Root, metrics: Buffer.Metrics) error{Stop}!syntax.Node {
const sel = try cursel.to_selection(root, metrics);
return try self.top_node_at_selection(sel, root, metrics);
}
fn expand_selection_to_parent_node(self: *Self, root: Buffer.Root, cursel: *CurSel, metrics: Buffer.Metrics) !void {
const sel = (try cursel.enable_selection(root, metrics)).*;
const node = try self.node_at_selection(sel, root, metrics);
var node = try self.top_node_at_selection(sel, root, metrics);
if (node.isNull()) return error.Stop;
const parent = node.getParent();
if (parent.isNull()) return error.Stop;
return cursel.select_node(parent, root, metrics);
var node_sel = try CurSel.selection_from_node(node, root, metrics);
if (!node_sel.eql(sel)) return cursel.select_node(node, root, metrics);
node = try cursel.select_parent_node(node, root, metrics);
while (cursel.selection.?.eql(sel))
node = try cursel.select_parent_node(node, root, metrics);
}
pub fn expand_selection(self: *Self, _: Context) Result {
@@ -4320,7 +4355,7 @@ pub const Editor = struct {
try if (cursel.selection) |_|
self.expand_selection_to_parent_node(root, cursel, self.metrics)
else
self.select_node_at_cursor(root, cursel, self.metrics);
cursel.select_node(try self.top_node_at_cursel(cursel, root, self.metrics), root, self.metrics);
self.clamp();
try self.send_editor_jump_destination();
}
@@ -4362,8 +4397,7 @@ pub const Editor = struct {
pub const shrink_selection_meta: Meta = .{ .description = "Shrink selection to first AST child node" };
fn select_next_sibling_node(self: *Self, root: Buffer.Root, cursel: *CurSel, metrics: Buffer.Metrics) !void {
const sel = (try cursel.enable_selection(root, metrics)).*;
const node = try self.node_at_selection(sel, root, metrics);
const node = try self.top_node_at_cursel(cursel, root, metrics);
if (node.isNull()) return error.Stop;
const sibling = syntax.Node.externs.ts_node_next_sibling(node);
if (sibling.isNull()) return error.Stop;
@@ -4371,8 +4405,7 @@ pub const Editor = struct {
}
fn select_next_named_sibling_node(self: *Self, root: Buffer.Root, cursel: *CurSel, metrics: Buffer.Metrics) !void {
const sel = (try cursel.enable_selection(root, metrics)).*;
const node = try self.node_at_selection(sel, root, metrics);
const node = try self.top_node_at_cursel(cursel, root, metrics);
if (node.isNull()) return error.Stop;
const sibling = syntax.Node.externs.ts_node_next_named_sibling(node);
if (sibling.isNull()) return error.Stop;
@@ -4386,19 +4419,17 @@ pub const Editor = struct {
const root = try self.buf_root();
const cursel = self.get_primary();
cursel.check_selection(root, self.metrics);
if (cursel.selection) |_|
try if (unnamed)
self.select_next_sibling_node(root, cursel, self.metrics)
else
self.select_next_named_sibling_node(root, cursel, self.metrics);
try if (unnamed)
self.select_next_sibling_node(root, cursel, self.metrics)
else
self.select_next_named_sibling_node(root, cursel, self.metrics);
self.clamp();
try self.send_editor_jump_destination();
}
pub const select_next_sibling_meta: Meta = .{ .description = "Move selection to next AST sibling node" };
fn select_prev_sibling_node(self: *Self, root: Buffer.Root, cursel: *CurSel, metrics: Buffer.Metrics) !void {
const sel = (try cursel.enable_selection(root, metrics)).*;
const node = try self.node_at_selection(sel, root, metrics);
const node = try self.top_node_at_cursel(cursel, root, metrics);
if (node.isNull()) return error.Stop;
const sibling = syntax.Node.externs.ts_node_prev_sibling(node);
if (sibling.isNull()) return error.Stop;
@@ -4406,8 +4437,7 @@ pub const Editor = struct {
}
fn select_prev_named_sibling_node(self: *Self, root: Buffer.Root, cursel: *CurSel, metrics: Buffer.Metrics) !void {
const sel = (try cursel.enable_selection(root, metrics)).*;
const node = try self.node_at_selection(sel, root, metrics);
const node = try self.top_node_at_cursel(cursel, root, metrics);
if (node.isNull()) return error.Stop;
const sibling = syntax.Node.externs.ts_node_prev_named_sibling(node);
if (sibling.isNull()) return error.Stop;
@@ -4421,11 +4451,10 @@ pub const Editor = struct {
const root = try self.buf_root();
const cursel = self.get_primary();
cursel.check_selection(root, self.metrics);
if (cursel.selection) |_|
try if (unnamed)
self.select_prev_sibling_node(root, cursel, self.metrics)
else
self.select_prev_named_sibling_node(root, cursel, self.metrics);
try if (unnamed)
self.select_prev_sibling_node(root, cursel, self.metrics)
else
self.select_prev_named_sibling_node(root, cursel, self.metrics);
self.clamp();
try self.send_editor_jump_destination();
}
@@ -5477,6 +5506,28 @@ pub const Editor = struct {
}
pub const goto_line_and_column_meta: Meta = .{ .arguments = &.{ .integer, .integer } };
pub fn goto_byte_offset(self: *Self, ctx: Context) Result {
try self.send_editor_jump_source();
var offset: usize = 0;
if (try ctx.args.match(.{
tp.extract(&offset),
})) {
// self.logger.print("goto: byte offset:{d}", .{ offset });
} else return error.InvalidGotoByteOffsetArgument;
self.cancel_all_selections();
const root = self.buf_root() catch return;
const eol_mode = self.buf_eol_mode() catch return;
const primary = self.get_primary();
primary.cursor = root.byte_offset_to_line_and_col(offset, self.metrics, eol_mode);
if (self.view.is_visible(&primary.cursor))
self.clamp()
else
try self.scroll_view_center(.{});
try self.send_editor_jump_destination();
self.need_render();
}
pub const goto_byte_offset_meta: Meta = .{ .arguments = &.{.integer} };
pub fn goto_definition(self: *Self, _: Context) Result {
const file_path = self.file_path orelse return;
const primary = self.get_primary();


@@ -224,6 +224,17 @@ pub const font_test_text: []const u8 =
\\🙂‍↔
\\
\\
\\你好世界 "Hello World"
\\一二三四五六七八九十 "123456789"
\\龍鳳麟龜 (dragon, phoenix, qilin, turtle)
\\Fullwidth numbers:
\\Fullwidth letters:  
\\Fullwidth punctuation:
\\Half-width (normal): ABC 123
\\Full-width (double):  
\\Punctuation: 。,、;:「」『』
\\Symbols: ○●□■△▲☆★◇◆
\\
\\ recommended fonts for terminals with no nerdfont fallback support (e.g. flow-gui):
\\
\\ "IosevkaTerm Nerd Font" => https://github.com/ryanoasis/nerd-fonts/releases/download/v3.3.0/IosevkaTerm.zip


@@ -150,10 +150,10 @@ pub fn receive(self: *Self, from_: tp.pid_ref, m: tp.message) error{Exit}!bool {
});
return true;
} else if (try m.match(.{ "navigate_complete", tp.extract(&same_file), tp.extract(&path), tp.extract(&goto_args), tp.extract(&line), tp.extract(&column) })) {
cmds.navigate_complete(self, same_file, path, goto_args, line, column) catch |e| return tp.exit_error(e, @errorReturnTrace());
cmds.navigate_complete(self, same_file, path, goto_args, line, column, null) catch |e| return tp.exit_error(e, @errorReturnTrace());
return true;
} else if (try m.match(.{ "navigate_complete", tp.extract(&same_file), tp.extract(&path), tp.extract(&goto_args), tp.null_, tp.null_ })) {
cmds.navigate_complete(self, same_file, path, goto_args, null, null) catch |e| return tp.exit_error(e, @errorReturnTrace());
cmds.navigate_complete(self, same_file, path, goto_args, null, null, null) catch |e| return tp.exit_error(e, @errorReturnTrace());
return true;
}
return if (try self.floating_views.send(from_, m)) true else self.widgets.send(from_, m);
@@ -349,6 +349,7 @@ const cmds = struct {
var file_name: []const u8 = undefined;
var line: ?i64 = null;
var column: ?i64 = null;
var offset: ?i64 = null;
var goto_args: []const u8 = &.{};
var iter = ctx.args.buf;
@@ -370,6 +371,9 @@
} else if (std.mem.eql(u8, field_name, "goto")) {
if (!try cbor.matchValue(&iter, cbor.extract_cbor(&goto_args)))
return error.InvalidNavigateGotoArgument;
} else if (std.mem.eql(u8, field_name, "offset")) {
if (!try cbor.matchValue(&iter, cbor.extract(&offset)))
return error.InvalidNavigateOffsetArgument;
} else {
try cbor.skipValue(&iter);
}
@@ -392,7 +396,8 @@
if (tui.config().restore_last_cursor_position and
!same_file and
!have_editor_metadata and
line == null)
line == null and
offset == null)
{
const ctx_: struct {
allocator: std.mem.Allocator,
@@ -424,11 +429,11 @@
return;
}
return cmds.navigate_complete(self, same_file, f, goto_args, line, column);
return cmds.navigate_complete(self, same_file, f, goto_args, line, column, offset);
}
pub const navigate_meta: Meta = .{ .arguments = &.{.object} };
fn navigate_complete(self: *Self, same_file: bool, f: []const u8, goto_args: []const u8, line: ?i64, column: ?i64) Result {
fn navigate_complete(self: *Self, same_file: bool, f: []const u8, goto_args: []const u8, line: ?i64, column: ?i64, offset: ?i64) Result {
if (!same_file) {
if (self.get_active_editor()) |editor| {
editor.send_editor_jump_source() catch {};
@@ -444,6 +449,10 @@
try command.executeName("scroll_view_center", .{});
if (column) |col|
try command.executeName("goto_column", command.fmt(.{col}));
} else if (offset) |o| {
try command.executeName("goto_byte_offset", command.fmt(.{o}));
if (!same_file)
try command.executeName("scroll_view_center", .{});
}
tui.need_render();
}
@@ -608,8 +617,10 @@
pub fn delete_buffer(self: *Self, ctx: Ctx) Result {
var file_path: []const u8 = undefined;
if (!(ctx.args.match(.{tp.extract(&file_path)}) catch false))
return error.InvalidDeleteBufferArgument;
if (!(ctx.args.match(.{tp.extract(&file_path)}) catch false)) {
const editor = self.get_active_editor() orelse return error.InvalidDeleteBufferArgument;
file_path = editor.file_path orelse return error.InvalidDeleteBufferArgument;
}
const buffer = self.buffer_manager.get_buffer_for_file(file_path) orelse return;
if (buffer.is_dirty())
return tp.exit("unsaved changes");


@@ -1,17 +1,82 @@
const fmt = @import("std").fmt;
const command = @import("command");
const tui = @import("../../tui.zig");
const Cursor = @import("../../editor.zig").Cursor;
pub const Type = @import("numeric_input.zig").Create(@This());
pub const create = Type.create;
pub const ValueType = struct {
cursor: Cursor = .{},
part: enum { row, col } = .row,
};
pub const Separator = ':';
pub fn name(_: *Type) []const u8 {
return "goto";
}
pub fn start(_: *Type) usize {
const editor = tui.get_active_editor() orelse return 1;
return editor.get_primary().cursor.row + 1;
pub fn start(_: *Type) ValueType {
const editor = tui.get_active_editor() orelse return .{};
return .{ .cursor = editor.get_primary().cursor };
}
pub fn process_digit(self: *Type, digit: u8) void {
const part = if (self.input) |input| input.part else .row;
switch (part) {
.row => switch (digit) {
0 => {
if (self.input) |*input| input.cursor.row = input.cursor.row * 10;
},
1...9 => {
if (self.input) |*input| {
input.cursor.row = input.cursor.row * 10 + digit;
} else {
self.input = .{ .cursor = .{ .row = digit } };
}
},
else => unreachable,
},
.col => if (self.input) |*input| {
input.cursor.col = input.cursor.col * 10 + digit;
},
}
}
pub fn process_separator(self: *Type) void {
if (self.input) |*input| switch (input.part) {
.row => input.part = .col,
else => {},
};
}
pub fn delete(self: *Type, input: *ValueType) void {
switch (input.part) {
.row => {
const newval = if (input.cursor.row < 10) 0 else input.cursor.row / 10;
if (newval == 0) self.input = null else input.cursor.row = newval;
},
.col => {
const newval = if (input.cursor.col < 10) 0 else input.cursor.col / 10;
if (newval == 0) {
input.part = .row;
input.cursor.col = 0;
} else input.cursor.col = newval;
},
}
}
pub fn format_value(_: *Type, input: ?ValueType, buf: []u8) []const u8 {
return if (input) |value| blk: {
switch (value.part) {
.row => break :blk fmt.bufPrint(buf, "{d}", .{value.cursor.row}) catch "",
.col => if (value.cursor.col == 0)
break :blk fmt.bufPrint(buf, "{d}:", .{value.cursor.row}) catch ""
else
break :blk fmt.bufPrint(buf, "{d}:{d}", .{ value.cursor.row, value.cursor.col }) catch "",
}
} else "";
}
pub const preview = goto;
@@ -19,5 +84,9 @@ pub const apply = goto;
pub const cancel = goto;
fn goto(self: *Type, _: command.Context) void {
command.executeName("goto_line", command.fmt(.{self.input orelse self.start})) catch {};
send_goto(if (self.input) |input| input.cursor else self.start.cursor);
}
fn send_goto(cursor: Cursor) void {
command.executeName("goto_line_and_column", command.fmt(.{ cursor.row, cursor.col })) catch {};
}


@@ -0,0 +1,58 @@
const fmt = @import("std").fmt;
const command = @import("command");
const tui = @import("../../tui.zig");
const Cursor = @import("../../editor.zig").Cursor;
pub const Type = @import("numeric_input.zig").Create(@This());
pub const create = Type.create;
pub const ValueType = struct {
cursor: Cursor = .{},
offset: usize = 0,
};
pub fn name(_: *Type) []const u8 {
return "goto byte";
}
pub fn start(_: *Type) ValueType {
const editor = tui.get_active_editor() orelse return .{};
return .{ .cursor = editor.get_primary().cursor };
}
pub fn process_digit(self: *Type, digit: u8) void {
switch (digit) {
0...9 => {
if (self.input) |*input| {
input.offset = input.offset * 10 + digit;
} else {
self.input = .{ .offset = digit };
}
},
else => unreachable,
}
}
pub fn delete(self: *Type, input: *ValueType) void {
const newval = if (input.offset < 10) 0 else input.offset / 10;
if (newval == 0) self.input = null else input.offset = newval;
}
pub fn format_value(_: *Type, input_: ?ValueType, buf: []u8) []const u8 {
return if (input_) |input|
fmt.bufPrint(buf, "{d}", .{input.offset}) catch ""
else
"";
}
pub const preview = goto;
pub const apply = goto;
pub const cancel = goto;
fn goto(self: *Type, _: command.Context) void {
if (self.input) |input|
command.executeName("goto_byte_offset", command.fmt(.{input.offset})) catch {}
else
command.executeName("goto_line_and_column", command.fmt(.{ self.start.cursor.row, self.start.cursor.col })) catch {};
}


@@ -18,10 +18,12 @@ pub fn Create(options: type) type {
const Commands = command.Collection(cmds);
const ValueType = if (@hasDecl(options, "ValueType")) options.ValueType else usize;
allocator: Allocator,
buf: [30]u8 = undefined,
input: ?usize = null,
start: usize,
input: ?ValueType = null,
start: ValueType,
ctx: command.Context,
commands: Commands = undefined,
@@ -31,7 +33,7 @@
self.* = .{
.allocator = allocator,
.ctx = .{ .args = try ctx.args.clone(allocator) },
.start = 0,
.start = if (@hasDecl(options, "ValueType")) ValueType{} else 0,
};
self.start = options.start(self);
try self.commands.init(self);
@@ -55,27 +57,42 @@
fn update_mini_mode_text(self: *Self) void {
if (tui.mini_mode()) |mini_mode| {
mini_mode.text = if (self.input) |linenum|
(fmt.bufPrint(&self.buf, "{d}", .{linenum}) catch "")
else
"";
if (@hasDecl(options, "format_value")) {
mini_mode.text = options.format_value(self, self.input, &self.buf);
} else {
mini_mode.text = if (self.input) |linenum|
(fmt.bufPrint(&self.buf, "{d}", .{linenum}) catch "")
else
"";
}
mini_mode.cursor = tui.egc_chunk_width(mini_mode.text, 0, 1);
}
}
fn insert_char(self: *Self, char: u8) void {
switch (char) {
'0' => {
if (self.input) |linenum| self.input = linenum * 10;
},
'1'...'9' => {
const digit: usize = @intCast(char - '0');
self.input = if (self.input) |x| x * 10 + digit else digit;
},
else => {},
const process_digit_ = if (@hasDecl(options, "process_digit")) options.process_digit else process_digit;
if (@hasDecl(options, "Separator")) {
switch (char) {
'0'...'9' => process_digit_(self, @intCast(char - '0')),
options.Separator => options.process_separator(self),
else => {},
}
} else {
switch (char) {
'0'...'9' => process_digit_(self, @intCast(char - '0')),
else => {},
}
}
}
fn process_digit(self: *Self, digit: u8) void {
self.input = switch (digit) {
0 => if (self.input) |value| value * 10 else 0,
1...9 => if (self.input) |x| x * 10 + digit else digit,
else => unreachable,
};
}
fn insert_bytes(self: *Self, bytes: []const u8) void {
for (bytes) |c| self.insert_char(c);
}
@@ -101,9 +118,13 @@
pub const mini_mode_cancel_meta: Meta = .{ .description = "Cancel input" };
pub fn mini_mode_delete_backwards(self: *Self, _: Ctx) Result {
if (self.input) |linenum| {
const newval = if (linenum < 10) 0 else linenum / 10;
self.input = if (newval == 0) null else newval;
if (self.input) |*input| {
if (@hasDecl(options, "delete")) {
options.delete(self, input);
} else {
const newval = if (input.* < 10) 0 else input.* / 10;
self.input = if (newval == 0) null else newval;
}
self.update_mini_mode_text();
options.preview(self, self.ctx);
}


@@ -37,9 +37,9 @@ pub fn select(self: *Type) void {
var buf = std.ArrayList(u8).init(self.allocator);
defer buf.deinit();
const file_path = project_manager.expand_home(&buf, self.file_path.items);
command.executeName("exit_mini_mode", .{}) catch {};
if (root.is_directory(file_path))
tp.self_pid().send(.{ "cmd", "change_project", .{file_path} }) catch {}
else if (file_path.len > 0)
tp.self_pid().send(.{ "cmd", "navigate", .{ .file = file_path } }) catch {};
command.executeName("exit_mini_mode", .{}) catch {};
}


@@ -117,6 +117,7 @@ fn select(menu: **Type.MenuState, button: *Type.ButtonState) void {
}
const CompletionItemKind = enum(u8) {
None = 0,
Text = 1,
Method = 2,
Function = 3,
@@ -146,6 +147,7 @@ const CompletionItemKind = enum(u8) {
fn kind_icon(kind: CompletionItemKind) []const u8 {
return switch (kind) {
.None => " ",
.Text => "󰊄",
.Method => "",
.Function => "󰊕",


@@ -51,6 +51,31 @@ const cmds_ = struct {
}
pub const @"e!_meta": Meta = .{ .description = "e! (force reload current file)" };
pub fn bd(_: *void, _: Ctx) Result {
try cmd("close_file", .{});
}
pub const bd_meta: Meta = .{ .description = "bd (Close file)" };
pub fn bw(_: *void, _: Ctx) Result {
try cmd("delete_buffer", .{});
}
pub const bw_meta: Meta = .{ .description = "bw (Delete buffer)" };
pub fn bnext(_: *void, _: Ctx) Result {
try cmd("next_tab", .{});
}
pub const bnext_meta: Meta = .{ .description = "bnext (Next buffer/tab)" };
pub fn bprevious(_: *void, _: Ctx) Result {
try cmd("previous_tab", .{});
}
pub const bprevious_meta: Meta = .{ .description = "bprevious (Previous buffer/tab)" };
pub fn ls(_: *void, _: Ctx) Result {
try cmd("switch_buffers", .{});
}
pub const ls_meta: Meta = .{ .description = "ls (List/switch buffers)" };
pub fn move_begin_or_add_integer_argument_zero(_: *void, _: Ctx) Result {
return if (@import("keybind").current_integer_argument()) |_|
command.executeName("add_integer_argument_digit", command.fmt(.{0}))


@@ -1034,9 +1034,31 @@ const cmds = struct {
pub const find_in_files_meta: Meta = .{ .description = "Find in files" };
pub fn goto(self: *Self, ctx: Ctx) Result {
return enter_mini_mode(self, @import("mode/mini/goto.zig"), ctx);
var line: usize = undefined;
var column: usize = undefined;
return if (try ctx.args.match(.{tp.extract(&line)}))
command.executeName("goto_line", command.fmt(.{line}))
else if (try ctx.args.match(.{ tp.extract(&line), tp.extract(&column) }))
command.executeName("goto_line_and_column", command.fmt(.{ line, column }))
else
enter_mini_mode(self, @import("mode/mini/goto.zig"), ctx);
}
pub const goto_meta: Meta = .{ .description = "Goto line" };
pub const goto_meta: Meta = .{
.description = "Goto line",
.arguments = &.{ .integer, .integer },
};
pub fn goto_offset(self: *Self, ctx: Ctx) Result {
var offset: usize = undefined;
return if (try ctx.args.match(.{tp.extract(&offset)}))
command.executeName("goto_byte_offset", command.fmt(.{offset}))
else
enter_mini_mode(self, @import("mode/mini/goto_offset.zig"), ctx);
}
pub const goto_offset_meta: Meta = .{
.description = "Goto byte offset",
.arguments = &.{.integer},
};
pub fn move_to_char(self: *Self, ctx: Ctx) Result {
return enter_mini_mode(self, @import("mode/mini/move_to_char.zig"), ctx);


@@ -75,6 +75,7 @@ pub fn render(
self: *const DwriteRenderer,
font: Font,
utf8: []const u8,
double_width: bool,
) void {
var utf16_buf: [10]u16 = undefined;
const utf16_len = std.unicode.utf8ToUtf16Le(&utf16_buf, utf8) catch unreachable;
@@ -85,7 +86,10 @@
const rect: win32.D2D_RECT_F = .{
.left = 0,
.top = 0,
.right = @floatFromInt(font.cell_size.x),
.right = if (double_width)
@as(f32, @floatFromInt(font.cell_size.x)) * 2
else
@as(f32, @floatFromInt(font.cell_size.x)),
.bottom = @floatFromInt(font.cell_size.y),
};
self.render_target.BeginDraw();
@@ -96,7 +100,7 @@
self.render_target.DrawText(
@ptrCast(utf16.ptr),
@intCast(utf16.len),
font.text_format,
if (double_width) font.text_format_double else font.text_format_single,
&rect,
&self.white_brush.ID2D1Brush,
.{},


@@ -5,9 +5,15 @@ const Node = struct {
prev: ?u32,
next: ?u32,
codepoint: ?u21,
right_half: ?bool,
};
map: std.AutoHashMapUnmanaged(u21, u32) = .{},
const MapKey = struct {
codepoint: u21,
right_half: bool,
};
map: std.AutoHashMapUnmanaged(MapKey, u32) = .{},
nodes: []Node,
front: u32,
back: u32,
@@ -25,13 +31,14 @@ pub fn init(allocator: std.mem.Allocator, capacity: u32) error{OutOfMemory}!Glyp
pub fn clearRetainingCapacity(self: *GlyphIndexCache) void {
self.map.clearRetainingCapacity();
self.nodes[0] = .{ .prev = null, .next = 1, .codepoint = null };
self.nodes[self.nodes.len - 1] = .{ .prev = @intCast(self.nodes.len - 2), .next = null, .codepoint = null };
self.nodes[0] = .{ .prev = null, .next = 1, .codepoint = null, .right_half = null };
self.nodes[self.nodes.len - 1] = .{ .prev = @intCast(self.nodes.len - 2), .next = null, .codepoint = null, .right_half = null };
for (self.nodes[1 .. self.nodes.len - 1], 1..) |*node, index| {
node.* = .{
.prev = @intCast(index - 1),
.next = @intCast(index + 1),
.codepoint = null,
.right_half = null,
};
}
self.front = 0;
@@ -51,12 +58,12 @@ const Reserved = struct {
index: u32,
replaced: ?u21,
};
pub fn reserve(self: *GlyphIndexCache, allocator: std.mem.Allocator, codepoint: u21) error{OutOfMemory}!union(enum) {
pub fn reserve(self: *GlyphIndexCache, allocator: std.mem.Allocator, codepoint: u21, right_half: bool) error{OutOfMemory}!union(enum) {
newly_reserved: Reserved,
already_reserved: u32,
} {
{
const entry = try self.map.getOrPut(allocator, codepoint);
const entry = try self.map.getOrPut(allocator, .{ .codepoint = codepoint, .right_half = right_half });
if (entry.found_existing) {
self.moveToBack(entry.value_ptr.*);
return .{ .already_reserved = entry.value_ptr.* };
@@ -69,7 +76,7 @@ pub fn reserve(self: *GlyphIndexCache, allocator: std.mem.Allocator, codepoint:
const replaced = self.nodes[self.front].codepoint;
self.nodes[self.front].codepoint = codepoint;
if (replaced) |r| {
const removed = self.map.remove(r);
const removed = self.map.remove(.{ .codepoint = r, .right_half = self.nodes[self.front].right_half orelse false });
std.debug.assert(removed);
}
const save_front = self.front;


@@ -149,7 +149,7 @@ pub const WindowState = struct {
}
// TODO: this should take a utf8 grapheme instead
pub fn generateGlyph(state: *WindowState, font: Font, codepoint: u21) u32 {
pub fn generateGlyph(state: *WindowState, font: Font, codepoint: u21, kind: enum { single, left, right }) u32 {
// for now we'll just use 1 texture and leverage the entire thing
const texture_cell_count: XY(u16) = getD3d11TextureMaxCellCount(font.cell_size);
const texture_cell_count_total: u32 =
@@ -184,16 +184,27 @@
break :blk &(state.glyph_index_cache.?);
};
const right_half: bool = switch (kind) {
.single, .left => false,
.right => true,
};
switch (glyph_index_cache.reserve(
global.glyph_cache_arena.allocator(),
codepoint,
right_half,
) catch |e| oom(e)) {
.newly_reserved => |reserved| {
// var render_success = false;
// defer if (!render_success) state.glyph_index_cache.remove(reserved.index);
const pos: XY(u16) = cellPosFromIndex(reserved.index, texture_cell_count.x);
const coord = coordFromCellPos(font.cell_size, pos);
const staging = global.staging_texture.update(font.cell_size);
const staging_size: XY(u16) = .{
// twice the width to handle double-wide glyphs
.x = font.cell_size.x * 2,
.y = font.cell_size.y,
};
const staging = global.staging_texture.update(staging_size);
var utf8_buf: [7]u8 = undefined;
const utf8_len: u3 = std.unicode.utf8Encode(codepoint, &utf8_buf) catch |e| std.debug.panic(
"todo: handle invalid codepoint {} (0x{0x}) ({s})",
@@ -202,12 +213,16 @@
staging.text_renderer.render(
font,
utf8_buf[0..utf8_len],
switch (kind) {
.single => false,
.left, .right => true,
},
);
const box: win32.D3D11_BOX = .{
.left = 0,
.left = if (right_half) font.cell_size.x else 0,
.top = 0,
.front = 0,
.right = font.cell_size.x,
.right = if (right_half) font.cell_size.x * 2 else font.cell_size.x,
.bottom = font.cell_size.y,
.back = 1,
};
@@ -289,7 +304,7 @@ pub fn paint(
}
const copy_col_count: u16 = @min(col_count, shader_col_count);
const blank_space_glyph_index = state.generateGlyph(font, ' ');
const blank_space_glyph_index = state.generateGlyph(font, ' ', .single);
const cell_count: u32 = @as(u32, shader_col_count) * @as(u32, shader_row_count);
state.shader_cells.updateCount(cell_count);


@@ -23,11 +23,13 @@ pub fn init() void {
}
pub const Font = struct {
text_format: *win32.IDWriteTextFormat,
text_format_single: *win32.IDWriteTextFormat,
text_format_double: *win32.IDWriteTextFormat,
cell_size: XY(u16),
pub fn init(dpi: u32, size: f32, face: *const FontFace) Font {
var text_format: *win32.IDWriteTextFormat = undefined;
var text_format_single: *win32.IDWriteTextFormat = undefined;
{
const hr = global.dwrite_factory.CreateTextFormat(
face.ptr(),
@@ -37,14 +39,43 @@
.NORMAL, // stretch
win32.scaleDpi(f32, size, dpi),
win32.L(""), // locale
&text_format,
&text_format_single,
);
if (hr < 0) std.debug.panic(
"CreateTextFormat '{}' height {d} failed, hresult=0x{x}",
.{ std.unicode.fmtUtf16Le(face.slice()), size, @as(u32, @bitCast(hr)) },
);
}
errdefer _ = text_format.IUnknown.Release();
errdefer _ = text_format_single.IUnknown.Release();
var text_format_double: *win32.IDWriteTextFormat = undefined;
{
const hr = global.dwrite_factory.CreateTextFormat(
face.ptr(),
null,
.NORMAL, //weight
.NORMAL, // style
.NORMAL, // stretch
win32.scaleDpi(f32, size, dpi),
win32.L(""), // locale
&text_format_double,
);
if (hr < 0) std.debug.panic(
"CreateTextFormat '{}' height {d} failed, hresult=0x{x}",
.{ std.unicode.fmtUtf16Le(face.slice()), size, @as(u32, @bitCast(hr)) },
);
}
errdefer _ = text_format_double.IUnknown.Release();
{
const hr = text_format_double.SetTextAlignment(win32.DWRITE_TEXT_ALIGNMENT_CENTER);
if (hr < 0) fatalHr("SetTextAlignment", hr);
}
{
const hr = text_format_double.SetParagraphAlignment(win32.DWRITE_PARAGRAPH_ALIGNMENT_CENTER);
if (hr < 0) fatalHr("SetParagraphAlignment", hr);
}
const cell_size: XY(u16) = blk: {
var text_layout: *win32.IDWriteTextLayout = undefined;
@@ -52,7 +83,7 @@
const hr = global.dwrite_factory.CreateTextLayout(
win32.L(""),
1,
text_format,
text_format_single,
std.math.floatMax(f32),
std.math.floatMax(f32),
&text_layout,
@@ -73,13 +104,15 @@
};
return .{
.text_format = text_format,
.text_format_single = text_format_single,
.text_format_double = text_format_double,
.cell_size = cell_size,
};
}
pub fn deinit(self: *Font) void {
_ = self.text_format.IUnknown.Release();
_ = self.text_format_single.IUnknown.Release();
_ = self.text_format_double.IUnknown.Release();
self.* = undefined;
}


@@ -1081,19 +1081,31 @@ fn WndProc(
global.render_cells_arena.allocator(),
global.screen.buf.len,
) catch |e| oom(e);
var prev_width: usize = 1;
var prev_cell: render.Cell = undefined;
var prev_codepoint: u21 = undefined;
for (global.screen.buf, global.render_cells.items) |*screen_cell, *render_cell| {
const codepoint = if (std.unicode.utf8ValidateSlice(screen_cell.char.grapheme))
const width = screen_cell.char.width;
// temporary workaround, ignore multi-codepoint graphemes
const codepoint = if (screen_cell.char.grapheme.len > 4)
std.unicode.replacement_character
else if (std.unicode.utf8ValidateSlice(screen_cell.char.grapheme))
std.unicode.wtf8Decode(screen_cell.char.grapheme) catch std.unicode.replacement_character
else
std.unicode.replacement_character;
render_cell.* = .{
.glyph_index = state.render_state.generateGlyph(
font,
codepoint,
),
.background = renderColorFromVaxis(screen_cell.style.bg),
.foreground = renderColorFromVaxis(screen_cell.style.fg),
};
if (prev_width > 1) {
render_cell.* = prev_cell;
render_cell.glyph_index = state.render_state.generateGlyph(font, prev_codepoint, .right);
} else {
render_cell.* = .{
.glyph_index = state.render_state.generateGlyph(font, codepoint, if (width == 1) .single else .left),
.background = renderColorFromVaxis(screen_cell.style.bg),
.foreground = renderColorFromVaxis(screen_cell.style.fg),
};
}
prev_width = width;
prev_cell = render_cell.*;
prev_codepoint = codepoint;
}
render.paint(
&state.render_state,


@@ -190,6 +190,14 @@ test "get_byte_pos" {
try std.testing.expectEqual(33, try buffer.root.get_byte_pos(.{ .row = 4, .col = 0 }, metrics(), eol_mode));
try std.testing.expectEqual(66, try buffer.root.get_byte_pos(.{ .row = 8, .col = 0 }, metrics(), eol_mode));
try std.testing.expectEqual(97, try buffer.root.get_byte_pos(.{ .row = 11, .col = 2 }, metrics(), eol_mode));
eol_mode = .crlf;
try std.testing.expectEqual(0, try buffer.root.get_byte_pos(.{ .row = 0, .col = 0 }, metrics(), eol_mode));
try std.testing.expectEqual(10, try buffer.root.get_byte_pos(.{ .row = 1, .col = 0 }, metrics(), eol_mode));
try std.testing.expectEqual(12, try buffer.root.get_byte_pos(.{ .row = 1, .col = 2 }, metrics(), eol_mode));
try std.testing.expectEqual(37, try buffer.root.get_byte_pos(.{ .row = 4, .col = 0 }, metrics(), eol_mode));
try std.testing.expectEqual(74, try buffer.root.get_byte_pos(.{ .row = 8, .col = 0 }, metrics(), eol_mode));
try std.testing.expectEqual(108, try buffer.root.get_byte_pos(.{ .row = 11, .col = 2 }, metrics(), eol_mode));
}
test "delete_bytes" {
@@ -406,3 +414,44 @@ test "get_from_pos" {
const result3 = buffer.root.get_from_pos(.{ .row = 1, .col = 5 }, &result_buf, metrics());
try std.testing.expectEqualDeep(result3[0 .. line1.len - 4], line1[4..]);
}
test "byte_offset_to_line_and_col" {
const doc: []const u8 =
\\All your
\\ropes
\\are belong to
\\us!
\\All your
\\ropes
\\are belong to
\\us!
\\All your
\\ropes
\\are belong to
\\us!
;
var eol_mode: Buffer.EolMode = .lf;
var sanitized: bool = false;
const buffer = try Buffer.create(a);
defer buffer.deinit();
buffer.update(try buffer.load_from_string(doc, &eol_mode, &sanitized));
try std.testing.expectEqual(Buffer.Cursor{ .row = 0, .col = 0 }, buffer.root.byte_offset_to_line_and_col(0, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 0, .col = 8 }, buffer.root.byte_offset_to_line_and_col(8, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 1, .col = 0 }, buffer.root.byte_offset_to_line_and_col(9, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 1, .col = 2 }, buffer.root.byte_offset_to_line_and_col(11, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 4, .col = 0 }, buffer.root.byte_offset_to_line_and_col(33, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 8, .col = 0 }, buffer.root.byte_offset_to_line_and_col(66, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 11, .col = 2 }, buffer.root.byte_offset_to_line_and_col(97, metrics(), eol_mode));
eol_mode = .crlf;
try std.testing.expectEqual(Buffer.Cursor{ .row = 0, .col = 0 }, buffer.root.byte_offset_to_line_and_col(0, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 0, .col = 8 }, buffer.root.byte_offset_to_line_and_col(8, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 0, .col = 8 }, buffer.root.byte_offset_to_line_and_col(9, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 1, .col = 0 }, buffer.root.byte_offset_to_line_and_col(10, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 1, .col = 2 }, buffer.root.byte_offset_to_line_and_col(12, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 4, .col = 0 }, buffer.root.byte_offset_to_line_and_col(37, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 8, .col = 0 }, buffer.root.byte_offset_to_line_and_col(74, metrics(), eol_mode));
try std.testing.expectEqual(Buffer.Cursor{ .row = 11, .col = 2 }, buffer.root.byte_offset_to_line_and_col(108, metrics(), eol_mode));
}