Compare commits
198 Commits
syndicate- ... main
Author | SHA1 | Date |
---|---|---|
Tony Garnock-Jones | 20d8c3cb83 | |
Tony Garnock-Jones | 4c47a620d6 | |
Tony Garnock-Jones | f84405c686 | |
Tony Garnock-Jones | d1cb35dc76 | |
Tony Garnock-Jones | 0e81be663f | |
Tony Garnock-Jones | 3d56309766 | |
Tony Garnock-Jones | 92cc57d2cd | |
Tony Garnock-Jones | 46f4071d4f | |
Tony Garnock-Jones | 57037d6f8c | |
Tony Garnock-Jones | af4cc732d5 | |
Tony Garnock-Jones | 9f847469f2 | |
Tony Garnock-Jones | 6e5428d1d3 | |
Tony Garnock-Jones | bf0c30a160 | |
Tony Garnock-Jones | c989c3a849 | |
Tony Garnock-Jones | b10acaf4a6 | |
Tony Garnock-Jones | b9053ad881 | |
Tony Garnock-Jones | 8311b0a020 | |
Tony Garnock-Jones | 64a4074273 | |
Tony Garnock-Jones | 53d859e50f | |
Tony Garnock-Jones | d301c09b02 | |
Tony Garnock-Jones | 2bff59c41a | |
Tony Garnock-Jones | 39f0e8cdf1 | |
Tony Garnock-Jones | 3e0d6af497 | |
Tony Garnock-Jones | 599b4ed469 | |
Tony Garnock-Jones | 6e555c9fd5 | |
Emery Hemingway | 8ebde104ca | |
Tony Garnock-Jones | 6468e16790 | |
Tony Garnock-Jones | 65101e900e | |
Tony Garnock-Jones | 581886835a | |
Tony Garnock-Jones | dcb1aec142 | |
Tony Garnock-Jones | c0239cf322 | |
Tony Garnock-Jones | 9cc4175f24 | |
Tony Garnock-Jones | 70f42dd931 | |
Tony Garnock-Jones | ef1ebe6412 | |
Tony Garnock-Jones | deec008c66 | |
Tony Garnock-Jones | 008671d0b2 | |
Tony Garnock-Jones | 9fcf22e1b5 | |
Tony Garnock-Jones | ca18ca08df | |
Tony Garnock-Jones | 40ca168eac | |
Tony Garnock-Jones | 5a73e8d4c3 | |
Tony Garnock-Jones | 91b26001d8 | |
Tony Garnock-Jones | b83b39515d | |
Tony Garnock-Jones | d9fa6362af | |
Tony Garnock-Jones | 94598a574b | |
Tony Garnock-Jones | 80ad0914ed | |
Tony Garnock-Jones | bdb0cc1023 | |
Tony Garnock-Jones | 710ff91a64 | |
Tony Garnock-Jones | d3748a286b | |
Tony Garnock-Jones | a56aec2c30 | |
Tony Garnock-Jones | 0c06ae9601 | |
Tony Garnock-Jones | 1f0c9d2883 | |
Tony Garnock-Jones | 615830f799 | |
Tony Garnock-Jones | 3c44768a72 | |
Tony Garnock-Jones | 04bb8c2f23 | |
Tony Garnock-Jones | 9084c1781e | |
Tony Garnock-Jones | 8a817fcb4f | |
Tony Garnock-Jones | 2ed2b38edc | |
Tony Garnock-Jones | 5090625f47 | |
Tony Garnock-Jones | a7ede65bad | |
Tony Garnock-Jones | c59e044695 | |
Tony Garnock-Jones | ef98217a3a | |
Tony Garnock-Jones | bf0d47f1b7 | |
Tony Garnock-Jones | fef41f39eb | |
Tony Garnock-Jones | 0b72b4029b | |
Tony Garnock-Jones | 40a239c9eb | |
Tony Garnock-Jones | 55456621d4 | |
Tony Garnock-Jones | 7797a3cd09 | |
Tony Garnock-Jones | eb9d9bed0f | |
Tony Garnock-Jones | b96c469ef5 | |
Tony Garnock-Jones | 34f611f4fe | |
Tony Garnock-Jones | 58c24c30c4 | |
Tony Garnock-Jones | fa990bc042 | |
Tony Garnock-Jones | 060ba36d2e | |
Tony Garnock-Jones | ecd5e87823 | |
Tony Garnock-Jones | a401e5fcd1 | |
Tony Garnock-Jones | 5db05b2df2 | |
Tony Garnock-Jones | f4a4b4d595 | |
Tony Garnock-Jones | b7d4bd4b58 | |
Tony Garnock-Jones | 41cf85f865 | |
Tony Garnock-Jones | 4fcb14d63e | |
Tony Garnock-Jones | b4f355aa0d | |
Tony Garnock-Jones | 5a431b2060 | |
Tony Garnock-Jones | 1ff222b291 | |
Tony Garnock-Jones | e501d0f76a | |
Tony Garnock-Jones | 2e65d31d5d | |
Tony Garnock-Jones | 852f0f4722 | |
Tony Garnock-Jones | 9850c73993 | |
Tony Garnock-Jones | 9864ce0ec8 | |
Tony Garnock-Jones | 19b1e84e43 | |
Tony Garnock-Jones | 3649cc1237 | |
Tony Garnock-Jones | 0f2d9239f9 | |
Tony Garnock-Jones | 0514f11d0f | |
Tony Garnock-Jones | 12428bbdf6 | |
Tony Garnock-Jones | 5dd68e87c1 | |
Tony Garnock-Jones | e2a32b891d | |
Tony Garnock-Jones | 461ac034f8 | |
Tony Garnock-Jones | 19cbceda7a | |
Tony Garnock-Jones | 97876335ba | |
Tony Garnock-Jones | d7b330e6dd | |
Tony Garnock-Jones | 3cbe17790d | |
Tony Garnock-Jones | 1d97ed1b55 | |
Tony Garnock-Jones | 15914aa153 | |
Tony Garnock-Jones | 4f42bbe7b6 | |
Tony Garnock-Jones | 9c32a4a4b8 | |
Tony Garnock-Jones | 56f04786ab | |
Tony Garnock-Jones | 545e247c21 | |
Tony Garnock-Jones | 06f16d42ec | |
Tony Garnock-Jones | fe861e516f | |
Tony Garnock-Jones | 13c841ce6e | |
Tony Garnock-Jones | 9ae1be6f56 | |
Tony Garnock-Jones | 9786bcb285 | |
Tony Garnock-Jones | abb2978b9a | |
Tony Garnock-Jones | b1e20ac706 | |
Tony Garnock-Jones | 34b59cff3b | |
Tony Garnock-Jones | d514a5178f | |
Tony Garnock-Jones | e88c335735 | |
Tony Garnock-Jones | a38765affa | |
Tony Garnock-Jones | 65dae05890 | |
Tony Garnock-Jones | 090ac8780f | |
Tony Garnock-Jones | bbaacd3038 | |
Tony Garnock-Jones | 1d61ea0c8e | |
Tony Garnock-Jones | 1e9e60207b | |
Tony Garnock-Jones | 702057023d | |
Tony Garnock-Jones | 1f7930d31a | |
Tony Garnock-Jones | 764fb3b866 | |
Tony Garnock-Jones | 726265132f | |
Tony Garnock-Jones | f6b6dd25f1 | |
Tony Garnock-Jones | 94c7de2a08 | |
Tony Garnock-Jones | e4c2634088 | |
Tony Garnock-Jones | cbaeba7bba | |
Tony Garnock-Jones | f8c76e9230 | |
Tony Garnock-Jones | fe9ceaf65c | |
Tony Garnock-Jones | 60e6c6badf | |
Tony Garnock-Jones | 2bf2e29dc2 | |
Tony Garnock-Jones | 9a148ecfcc | |
Tony Garnock-Jones | 2104bc1ff0 | |
Tony Garnock-Jones | 17a9c96342 | |
Tony Garnock-Jones | 3c4ba48624 | |
Tony Garnock-Jones | e063a3f84d | |
Tony Garnock-Jones | 72566ac223 | |
Tony Garnock-Jones | 4e30ef48dc | |
Tony Garnock-Jones | d66840bae7 | |
Tony Garnock-Jones | 768fdd6448 | |
Tony Garnock-Jones | 8055895319 | |
Tony Garnock-Jones | a83999d6ed | |
Tony Garnock-Jones | 1f7b7a02b1 | |
Tony Garnock-Jones | 24b6217897 | |
Tony Garnock-Jones | d517fc4e92 | |
Tony Garnock-Jones | a0c40eadd0 | |
Tony Garnock-Jones | fc420d1a86 | |
Tony Garnock-Jones | f3e5652eee | |
Tony Garnock-Jones | 538ad4244c | |
Tony Garnock-Jones | 1cb2eba0e4 | |
Tony Garnock-Jones | a9971fc35a | |
Tony Garnock-Jones | 8dead81cef | |
Tony Garnock-Jones | 16681841a7 | |
Tony Garnock-Jones | 97fdfe6136 | |
Tony Garnock-Jones | c26b67f286 | |
Tony Garnock-Jones | 65db64fce1 | |
Tony Garnock-Jones | 0432f8a04a | |
Tony Garnock-Jones | dd69d5caaa | |
Tony Garnock-Jones | e6bc6d091f | |
Tony Garnock-Jones | 4c9505d28e | |
Tony Garnock-Jones | a74cd19526 | |
Tony Garnock-Jones | 5f3558817e | |
Tony Garnock-Jones | b4a3f743b5 | |
Tony Garnock-Jones | a340b127d7 | |
Tony Garnock-Jones | 08486b4b1c | |
Tony Garnock-Jones | d8a139b23a | |
Tony Garnock-Jones | 990f3fe4cb | |
Tony Garnock-Jones | 3a3c3c0ee4 | |
Tony Garnock-Jones | 46fd2dec3b | |
Tony Garnock-Jones | 7d7b3135ba | |
Tony Garnock-Jones | 06d52c43da | |
Tony Garnock-Jones | 1ae2583414 | |
Tony Garnock-Jones | 4dca1b1615 | |
Tony Garnock-Jones | 45406c75ac | |
Tony Garnock-Jones | f3c9662607 | |
Tony Garnock-Jones | f134d0227d | |
Tony Garnock-Jones | 82624d3007 | |
Tony Garnock-Jones | 8de00045e6 | |
Tony Garnock-Jones | 8b690b9103 | |
Tony Garnock-Jones | f8d1acfa3e | |
Tony Garnock-Jones | 5a52f243e5 | |
Tony Garnock-Jones | 6224baa2b6 | |
Tony Garnock-Jones | 00c99d96df | |
Tony Garnock-Jones | 6ec6bbaf41 | |
Tony Garnock-Jones | ddc94bfa60 | |
Tony Garnock-Jones | 8619342e5e | |
Tony Garnock-Jones | 5bcb268ff8 | |
Tony Garnock-Jones | 7e8dcef0e2 | |
Tony Garnock-Jones | 9a5d452754 | |
Tony Garnock-Jones | 9cd2e6776c | |
Tony Garnock-Jones | c0d4b535a3 | |
Tony Garnock-Jones | 3c1cb11779 | |
Tony Garnock-Jones | a086c1d721 | |
Tony Garnock-Jones | bc41182533 | |
Tony Garnock-Jones | 2ad99b56b8 | |
@@ -0,0 +1,24 @@
```yaml
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: docker
    container:
      image: git.syndicate-lang.org/syndicate-lang/rust-builder:latest
    steps:
      - uses: actions/checkout@v3
      - run: CROSS_CONTAINER_IN_CONTAINER=true make ci-release
      - uses: actions/upload-artifact@v3
        with:
          name: syndicate-server-x86_64
          path: target/dist/x86_64
      - uses: actions/upload-artifact@v3
        with:
          name: syndicate-server-aarch64
          path: target/dist/aarch64
      - uses: actions/upload-artifact@v3
        with:
          name: syndicate-server-armv7
          path: target/dist/armv7
```
@@ -0,0 +1,7 @@
```dockerfile
FROM rust:latest
RUN cargo install cross

# This is necessary for cross to be able to access /var/run/docker.sock
COPY --from=docker:dind /usr/local/bin/docker /usr/local/bin/

RUN curl -fsSL https://deb.nodesource.com/setup_20.x -o nodesource_setup.sh && bash nodesource_setup.sh && rm -f nodesource_setup.sh && apt -y install nodejs && apt clean
```
@@ -0,0 +1,11 @@
```sh
#!/bin/sh
#
# You need to have already logged in:
#
#   docker login git.syndicate-lang.org
#
# Use a token with read-only access to user scope, this seems to be sufficient (!)

set -e
docker build -t git.syndicate-lang.org/syndicate-lang/rust-builder .
docker push git.syndicate-lang.org/syndicate-lang/rust-builder
```
File diff suppressed because it is too large
@@ -2,6 +2,7 @@ cargo-features = ["strip"]
```toml
[workspace]
members = [
    "syndicate-schema-plugin",
    "syndicate",
    "syndicate-macros",
    "syndicate-server",
```
@@ -25,3 +26,9 @@ strip = true
```toml
[profile.bench]
debug = true

# [patch.crates-io]
# # Unfortunately, until [1] is fixed (perhaps via [2]), we have to use a patched proc-macro2.
# # [1]: https://github.com/dtolnay/proc-macro2/issues/402
# # [2]: https://github.com/dtolnay/proc-macro2/pull/407
# proc-macro2 = { git = "https://github.com/tonyg/proc-macro2", branch = "repair_span_start_end" }
```
37 Makefile
@@ -1,3 +1,5 @@
```diff
+__ignored__ := $(shell ./setup.sh)
+
 # Use cargo release to manage publication and versions etc.
 #
 #   cargo install cargo-release
```
@@ -15,22 +17,24 @@ ws-bump:
```diff
 	cargo workspaces version \
 		--no-global-tag \
 		--individual-tag-prefix '%n-v' \
-		--allow-branch 'main'
+		--allow-branch 'main' \
+		$(BUMP_ARGS)

 ws-publish:
 	cargo workspaces publish \
 		--from-git

+PROTOCOLS_BRANCH=main
 pull-protocols:
 	git subtree pull -P syndicate/protocols \
 		-m 'Merge latest changes from the syndicate-protocols repository' \
 		git@git.syndicate-lang.org:syndicate-lang/syndicate-protocols \
-		main
+		$(PROTOCOLS_BRANCH)

 static: static-x86_64

 static-%:
-	cross build --target $*-unknown-linux-musl --features vendored-openssl
+	CARGO_TARGET_DIR=target/target.$* cross build --target $*-unknown-linux-musl --features vendored-openssl,jemalloc

 ###########################################################################
```
@@ -52,28 +56,35 @@ static-%:
```diff
 x86_64-binary: x86_64-binary-release

 x86_64-binary-release:
-	cross build --target x86_64-unknown-linux-musl --release --all-targets --features vendored-openssl
+	CARGO_TARGET_DIR=target/target.x86_64 cross build --target x86_64-unknown-linux-musl --release --all-targets --features vendored-openssl,jemalloc

 x86_64-binary-debug:
-	cross build --target x86_64-unknown-linux-musl --all-targets --features vendored-openssl
+	CARGO_TARGET_DIR=target/target.x86_64 cross build --target x86_64-unknown-linux-musl --all-targets --features vendored-openssl

 armv7-binary: armv7-binary-release

 armv7-binary-release:
-	cross build --target=armv7-unknown-linux-musleabihf --release --all-targets --features vendored-openssl
+	CARGO_TARGET_DIR=target/target.armv7 cross build --target=armv7-unknown-linux-musleabihf --release --all-targets --features vendored-openssl

 armv7-binary-debug:
-	cross build --target=armv7-unknown-linux-musleabihf --all-targets --features vendored-openssl
+	CARGO_TARGET_DIR=target/target.armv7 cross build --target=armv7-unknown-linux-musleabihf --all-targets --features vendored-openssl

-# Hack to workaround https://github.com/rust-embedded/cross/issues/598
-HACK_WORKAROUND_ISSUE_598=CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS="-C link-arg=/usr/local/aarch64-linux-musl/lib/libc.a"
+# As of 2023-05-12 (and probably earlier!) this is no longer required with current Rust nightlies
+# # Hack to workaround https://github.com/rust-embedded/cross/issues/598
+# # HACK_WORKAROUND_ISSUE_598=CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS="-C link-arg=/usr/local/aarch64-linux-musl/lib/libc.a"

 aarch64-binary: aarch64-binary-release

 aarch64-binary-release:
-	$(HACK_WORKAROUND_ISSUE_598) \
-	cross build --target=aarch64-unknown-linux-musl --release --all-targets --features vendored-openssl
+	CARGO_TARGET_DIR=target/target.aarch64 cross build --target=aarch64-unknown-linux-musl --release --all-targets --features vendored-openssl,jemalloc

 aarch64-binary-debug:
-	$(HACK_WORKAROUND_ISSUE_598) \
-	cross build --target=aarch64-unknown-linux-musl --all-targets --features vendored-openssl
+	CARGO_TARGET_DIR=target/target.aarch64 cross build --target=aarch64-unknown-linux-musl --all-targets --features vendored-openssl
+
+ci-release: x86_64-binary-release aarch64-binary-release armv7-binary-release
+	rm -rf target/dist
+	for arch in x86_64 aarch64 armv7; do \
+		mkdir -p target/dist/$$arch; \
+		cp -a target/target.$$arch/$$arch-unknown-linux-musl*/release/syndicate-macaroon target/dist/$$arch; \
+		cp -a target/target.$$arch/$$arch-unknown-linux-musl*/release/syndicate-server target/dist/$$arch; \
+	done
```
22 README.md
@@ -23,16 +23,30 @@ A Rust implementation of:
```diff
 ## Quickstart

 From docker or podman:

     docker run -it --rm leastfixedpoint/syndicate-server /syndicate-server -p 8001

 Build and run from source:

     git clone https://git.syndicate-lang.org/syndicate-lang/syndicate-rs
     cd syndicate-rs
     cargo build --release
     ./target/release/syndicate-server -p 8001

+If you have [`mold`](https://github.com/rui314/mold) available (`apt install mold`), you may be
+able to get faster linking by creating `.cargo/config.toml` as follows:
+
+    [build]
+    rustflags = ["-C", "link-arg=-fuse-ld=mold"]
+
+Enabling the `jemalloc` feature can get a *substantial* (~20%-50%) improvement in throughput.
+
 ## Running the examples

-In one window, start the server:
+In one window, start the server with a basic configuration:

-    ./target/release/syndicate-server -p 8001
+    ./target/release/syndicate-server -c dev-scripts/benchmark-config.pr

 Then, choose one of the examples below.
```
@@ -70,7 +84,7 @@ about who kicks off the pingpong session.
```diff
 You may find better performance by restricting the server to fewer
 cores than you have available. For example, for me, running

-    taskset -c 0,1 ./target/release/syndicate-server -p 8001
+    taskset -c 0,1 ./target/release/syndicate-server -c dev-scripts/benchmark-config.pr

-roughly *quadruples* throughput for a single producer/consumer pair,
+roughly *doubles* throughput for a single producer/consumer pair,
 on my 48-core AMD CPU.
```
@@ -1,3 +1,3 @@
```diff
 let ?root_ds = dataspace
 <require-service <relay-listener <tcp "0.0.0.0" 9001> $gatekeeper>>
-<bind "syndicate" #x"" $root_ds>
+<bind <ref { oid: "syndicate" key: #x"" }> $root_ds #f>
```
@@ -1,2 +1,7 @@
```diff
 #!/bin/sh
-make -C ../syndicate-server binary && exec taskset -c 0,1 ../target/release/syndicate-server -c benchmark-config.pr "$@"
+TASKSET='taskset -c 0,1'
+if [ $(uname -s) = 'Darwin' ]
+then
+    TASKSET=
+fi
+make -C ../syndicate-server binary && exec $TASKSET ../target/release/syndicate-server -c benchmark-config.pr "$@"
```
@@ -1 +1 @@
```diff
-syndicate-server
+syndicate-server.*
```
@@ -1,4 +1,6 @@
```diff
 FROM busybox
 RUN mkdir /data
-COPY ./syndicate-server /
-CMD ["/syndicate-server", "-c", "/data"]
+ARG TARGETARCH
+COPY ./syndicate-server.$TARGETARCH /syndicate-server
+EXPOSE 1
+CMD ["/syndicate-server", "-c", "/data", "-p", "1"]
```
@@ -1,18 +1,37 @@
```diff
 U=leastfixedpoint
 I=syndicate-server
+ARCHITECTURES:=amd64 arm arm64
+SERVERS:=$(patsubst %,syndicate-server.%,$(ARCHITECTURES))
+VERSION=$(shell ./syndicate-server.$(shell ./docker-architecture $$(uname -m)) --version | cut -d' ' -f2)

 all:

-.PHONY: all clean image push syndicate-server
+.PHONY: all clean image push push-only

 clean:
-	rm -f syndicate-server
-	docker rmi leastfixedpoint/syndicate-server
+	rm -f syndicate-server.*
+	-podman images -q $(U)/$(I) | sort -u | xargs podman rmi -f

-image: syndicate-server
-	docker build -t leastfixedpoint/$$(./syndicate-server --version | tr ' ' ':') -t leastfixedpoint/syndicate-server:latest .
+image: $(SERVERS)
+	for A in $(ARCHITECTURES); do set -x; \
+		podman build --platform=linux/$$A \
+			-t $(U)/$(I):$(VERSION)-$$A \
+			-t $(U)/$(I):latest-$$A \
+			.; \
+	done
+	rm -f tmp.image

-push: image
-	docker push leastfixedpoint/$$(./syndicate-server --version | tr ' ' ':')
-	docker push leastfixedpoint/syndicate-server:latest
+push: image push-only

-syndicate-server:
-	make -C .. x86_64-binary-release
-	cp -a ../target/x86_64-unknown-linux-musl/release/syndicate-server $@
+push-only:
+	$(patsubst %,podman push $(U)/$(I):$(VERSION)-%;,$(ARCHITECTURES))
+	$(patsubst %,podman push $(U)/$(I):latest-%;,$(ARCHITECTURES))
+	podman rmi -f $(U)/$(I):$(VERSION) $(U)/$(I):latest
+	podman manifest create $(U)/$(I):$(VERSION) $(patsubst %,$(U)/$(I):$(VERSION)-%,$(ARCHITECTURES))
+	podman manifest create $(U)/$(I):latest $(patsubst %,$(U)/$(I):latest-%,$(ARCHITECTURES))
+	podman manifest push $(U)/$(I):$(VERSION)
+	podman manifest push $(U)/$(I):latest
+
+syndicate-server.%:
+	make -C .. $$(./alpine-architecture $*)-binary-release
+	cp -a ../target/target.$$(./alpine-architecture $*)/$$(./alpine-architecture $*)-unknown-linux-musl*/release/syndicate-server $@
```
@@ -0,0 +1,9 @@
```markdown
# Docker images for syndicate-server

Build using podman:

    apt install podman

and at least until the dependencies are fixed (?),

    apt install uidmap slirp4netns
```
@@ -0,0 +1,6 @@
```sh
#!/bin/sh
case $1 in
    amd64) echo x86_64;;
    arm) echo armv7;;
    arm64) echo aarch64;;
esac
```
@@ -0,0 +1,6 @@
```sh
#!/bin/sh
case $1 in
    x86_64) echo amd64;;
    armv7) echo arm;;
    aarch64) echo arm64;;
esac
```
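The two shell scripts above are inverse lookup tables between Docker/Debian architecture names (`amd64`, `arm`, `arm64`) and the Rust/musl architecture names used by the build (`x86_64`, `armv7`, `aarch64`). The same mapping can be sketched in Rust; the function names here are illustrative and not part of the repository:

```rust
// Map a Docker architecture name to the corresponding Rust/musl
// architecture name, mirroring the `docker-architecture` script above.
fn docker_to_rust_arch(a: &str) -> Option<&'static str> {
    match a {
        "amd64" => Some("x86_64"),
        "arm" => Some("armv7"),
        "arm64" => Some("aarch64"),
        _ => None, // unknown architectures fall through, like the shell `case`
    }
}

// The inverse mapping, mirroring the `alpine-architecture` script above.
fn rust_to_docker_arch(a: &str) -> Option<&'static str> {
    match a {
        "x86_64" => Some("amd64"),
        "armv7" => Some("arm"),
        "aarch64" => Some("arm64"),
        _ => None,
    }
}

fn main() {
    // The two tables are inverses of each other.
    for d in ["amd64", "arm", "arm64"] {
        let r = docker_to_rust_arch(d).unwrap();
        assert_eq!(rust_to_docker_arch(r), Some(d));
    }
    println!("architecture mappings agree");
}
```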
@@ -0,0 +1,9 @@
```yaml
version: "3"

services:
  syndicate:
    image: leastfixedpoint/syndicate-server
    ports:
      - "1:1"
    volumes:
      - "/etc/syndicate:/data"
```
@@ -9,3 +9,4 @@ buildtag() {
```diff
 git tag "$(buildtag syndicate/Cargo.toml)"
 git tag "$(buildtag syndicate-macros/Cargo.toml)"
 git tag "$(buildtag syndicate-server/Cargo.toml)"
+git tag "$(buildtag syndicate-tools/Cargo.toml)"
```
@@ -0,0 +1,152 @@
```
# We will create a TCP listener on port 9222, which speaks unencrypted
# protocol and allows interaction with the default/system gatekeeper, which
# has a single noise binding for introducing encrypted interaction with a
# *second* gatekeeper, which finally allows resolution of references to
# other objects.

# First, build a space where we place bindings for the inner gatekeeper to
# expose.
let ?inner-bindings = dataspace

# Next, start the inner gatekeeper.
<require-service <gatekeeper $inner-bindings>>
? <service-object <gatekeeper $inner-bindings> ?inner-gatekeeper> [
  # Expose it via a noise binding at the outer/system gatekeeper.
  <bind <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
                 secretKey: #[qLkyuJw/K4yobr4XVKExbinDwEx9QTt9PfDWyx14/kg],
                 service: world }>
        $inner-gatekeeper #f>
]

# Now, expose the outer gatekeeper to the world, via TCP. The system
# gatekeeper is a primordial syndicate-server object bound to $gatekeeper.
<require-service <relay-listener <tcp "0.0.0.0" 9222> $gatekeeper>>

# Finally, let's expose some behaviour accessible via the inner gatekeeper.
#
# We will create a service dataspace called $world.
let ?world = dataspace

# Running `syndicate-macaroon mint --oid a-service --phrase hello` yields:
#
#   <ref {oid: a-service, sig: #[JTTGQeYCgohMXW/2S2XH8g]}>
#
# That's a root capability for the service. We use the corresponding
# sturdy.SturdyDescriptionDetail to bind it to $world.
#
$inner-bindings += <bind <ref {oid: a-service, key: #"hello"}>
                        $world #f>

# Now, we can hand out paths to our services involving an initial noise
# step and a subsequent sturdyref/macaroon step.
#
# For example, running `syndicate-macaroon` like this:
#
#   syndicate-macaroon mint --oid a-service --phrase hello \
#       --caveat '<rewrite <bind <_>> <rec labelled [<lit "alice"> <ref 0>]>>'
#
# generates
#
#   <ref {caveats: [<rewrite <bind <_>> <rec labelled [<lit "alice">, <ref 0>]>>],
#         oid: a-service,
#         sig: #[CXn7+rAoO3Xr6Y6Laap3OA]}>
#
# which is an attenuation of the root capability we bound that wraps all
# assertions and messages in a `<labelled "alice" _>` wrapper.
#
# All together, the `gatekeeper.Route` that Alice would use would be
# something like:
#
#   <route [<ws "wss://generic-dataspace.demo.leastfixedpoint.com/">]
#          <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
#                   service: world }>
#          <ref { caveats: [<rewrite <bind <_>> <rec labelled [<lit "alice">, <ref 0>]>>],
#                 oid: a-service,
#                 sig: #[CXn7+rAoO3Xr6Y6Laap3OA] }>>
#
# Here's one for "bob":
#
#   syndicate-macaroon mint --oid a-service --phrase hello \
#       --caveat '<rewrite <bind <_>> <rec labelled [<lit "bob"> <ref 0>]>>'
#
#   <ref {caveats: [<rewrite <bind <_>> <rec labelled [<lit "bob">, <ref 0>]>>],
#         oid: a-service,
#         sig: #[/75BbF77LOiqNcvpzNHf0g]}>
#
#   <route [<ws "wss://generic-dataspace.demo.leastfixedpoint.com/">]
#          <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
#                   service: world }>
#          <ref { caveats: [<rewrite <bind <_>> <rec labelled [<lit "bob">, <ref 0>]>>],
#                 oid: a-service,
#                 sig: #[/75BbF77LOiqNcvpzNHf0g] }>>
#
# We relay labelled to unlabelled information, enacting a chat protocol
# that enforces usernames.
$world [
  # Assertions of presence have the username wiped out and replaced with the label.
  ? <labelled ?who <Present _>> <Present $who>

  # Likewise utterance messages.
  ?? <labelled ?who <Says _ ?what>> ! <Says $who $what>

  # We allow anyone to subscribe to presence and utterances.
  ? <labelled _ <Observe <rec Present ?p> ?o>> <Observe <rec Present $p> $o>
  ? <labelled _ <Observe <rec Says ?p> ?o>> <Observe <rec Says $p> $o>
]

# We can also use sturdyref rewrites to directly handle `Says` and
# `Present` values, rather than wrapping with `<labelled ...>` and
# unwrapping using the script fragment just above.
#
# The multiply-quoted patterns in the `Observe` cases start to get unwieldy
# at this point!
#
# For Alice:
#
#   syndicate-macaroon mint --oid a-service --phrase hello --caveat '<or [
#     <rewrite <rec Present [<_>]> <rec Present [<lit "alice">]>>
#     <rewrite <rec Says [<_> <bind String>]> <rec Says [<lit "alice"> <ref 0>]>>
#     <rewrite <bind <rec Observe [<rec rec [<lit Present> <_>]> <_>]>> <ref 0>>
#     <rewrite <bind <rec Observe [<rec rec [<lit Says> <_>]> <_>]>> <ref 0>>
#   ]>'
#
#   <ref { oid: a-service sig: #[s918Jk6As8AWJ9rtozOTlg] caveats: [<or [
#     <rewrite <rec Present [<_>]> <rec Present [<lit "alice">]>>
#     <rewrite <rec Says [<_>, <bind String>]> <rec Says [<lit "alice">, <ref 0>]>>
#     <rewrite <bind <rec Observe [<rec rec [<lit Present>, <_>]>, <_>]>> <ref 0>>
#     <rewrite <bind <rec Observe [<rec rec [<lit Says>, <_>]>, <_>]>> <ref 0>> ]>]}>
#
#   <route [<ws "wss://generic-dataspace.demo.leastfixedpoint.com/">]
#          <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
#                   service: world }>
#          <ref { oid: a-service sig: #[s918Jk6As8AWJ9rtozOTlg] caveats: [<or [
#            <rewrite <rec Present [<_>]> <rec Present [<lit "alice">]>>
#            <rewrite <rec Says [<_>, <bind String>]> <rec Says [<lit "alice">, <ref 0>]>>
#            <rewrite <bind <rec Observe [<rec rec [<lit Present>, <_>]>, <_>]>> <ref 0>>
#            <rewrite <bind <rec Observe [<rec rec [<lit Says>, <_>]>, <_>]>> <ref 0>> ]>]}>>
#
# For Bob:
#
#   syndicate-macaroon mint --oid a-service --phrase hello --caveat '<or [
#     <rewrite <rec Present [<_>]> <rec Present [<lit "bob">]>>
#     <rewrite <rec Says [<_> <bind String>]> <rec Says [<lit "bob"> <ref 0>]>>
#     <rewrite <bind <rec Observe [<rec rec [<lit Present> <_>]> <_>]>> <ref 0>>
#     <rewrite <bind <rec Observe [<rec rec [<lit Says> <_>]> <_>]>> <ref 0>>
#   ]>'
#
#   <ref { oid: a-service sig: #[QBbV4LrS0i3BG6OyCPJl+A] caveats: [<or [
#     <rewrite <rec Present [<_>]> <rec Present [<lit "bob">]>>
#     <rewrite <rec Says [<_>, <bind String>]> <rec Says [<lit "bob">, <ref 0>]>>
#     <rewrite <bind <rec Observe [<rec rec [<lit Present>, <_>]>, <_>]>> <ref 0>>
#     <rewrite <bind <rec Observe [<rec rec [<lit Says>, <_>]>, <_>]>> <ref 0>> ]>]}>
#
#   <route [<ws "wss://generic-dataspace.demo.leastfixedpoint.com/">]
#          <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
#                   service: world }>
#          <ref { oid: a-service sig: #[QBbV4LrS0i3BG6OyCPJl+A] caveats: [<or [
#            <rewrite <rec Present [<_>]> <rec Present [<lit "bob">]>>
#            <rewrite <rec Says [<_>, <bind String>]> <rec Says [<lit "bob">, <ref 0>]>>
#            <rewrite <bind <rec Observe [<rec rec [<lit Present>, <_>]>, <_>]>> <ref 0>>
#            <rewrite <bind <rec Observe [<rec rec [<lit Says>, <_>]>, <_>]>> <ref 0>> ]>]}>>
```
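The `caveats`/`sig` structure in the refs above follows the macaroon pattern: adding a caveat replaces the signature with a keyed hash of the previous signature and the caveat, so a holder can attenuate a capability further but can never strip caveats back off. A toy sketch of that chaining, using `DefaultHasher` as a stand-in for a real keyed MAC (the type and function names are invented for illustration and are not syndicate-server's API; a real implementation would use HMAC over canonical serialized values):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for the keyed MAC a real macaroon scheme would use.
fn chain(sig: u64, caveat: &str) -> u64 {
    let mut h = DefaultHasher::new();
    sig.hash(&mut h);
    caveat.hash(&mut h);
    h.finish()
}

struct Macaroon {
    oid: String,
    caveats: Vec<String>,
    sig: u64,
}

impl Macaroon {
    // The issuer derives the root signature from a secret key and the oid.
    fn mint(oid: &str, root_key: u64) -> Self {
        Macaroon { oid: oid.to_string(), caveats: Vec::new(), sig: chain(root_key, oid) }
    }

    // Anyone holding the macaroon can narrow it by appending a caveat;
    // the new sig depends on every caveat added so far.
    fn attenuate(mut self, caveat: &str) -> Self {
        self.sig = chain(self.sig, caveat);
        self.caveats.push(caveat.to_string());
        self
    }

    // Only the issuer (who knows root_key) can recompute the chain to verify.
    fn verify(&self, root_key: u64) -> bool {
        let mut sig = chain(root_key, &self.oid);
        for c in &self.caveats {
            sig = chain(sig, c);
        }
        sig == self.sig
    }
}

fn main() {
    let root_key = 0x5eed;
    let alice = Macaroon::mint("a-service", root_key)
        .attenuate(r#"<rewrite <bind <_>> <rec labelled [<lit "alice"> <ref 0>]>>"#);
    assert!(alice.verify(root_key));

    // Dropping the caveat while keeping the sig fails verification:
    // authority can be narrowed, never widened.
    let widened = Macaroon { oid: alice.oid.clone(), caveats: Vec::new(), sig: alice.sig };
    assert!(!widened.verify(root_key));
    println!("attenuation chain verified");
}
```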
@@ -0,0 +1,23 @@
```sh
#!/bin/sh

set -e
exec 1>&2

failed=
cmp_and_fail() {
    if ! cmp "$1" "$2"
    then
        failed=failed
    fi
}

COMMAND=cmp_and_fail
if [ "$1" = "--fix" ];
then
    COMMAND=cp
fi

# Ensure that various copies of cross-package data are identical.
${COMMAND} syndicate/protocols/schema-bundle.bin syndicate-schema-plugin/schema-bundle.bin

[ -z "$failed" ]
```
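The hook's core operation is a byte-for-byte comparison of two files, as `cmp` performs. A minimal Rust sketch of the same check (the paths and function name here are illustrative, not from the repository):

```rust
use std::fs;
use std::io;

// Return Ok(true) when the two files have identical contents,
// mirroring what `cmp` does in the hook above.
fn files_identical(a: &str, b: &str) -> io::Result<bool> {
    Ok(fs::read(a)? == fs::read(b)?)
}

fn main() -> io::Result<()> {
    // Demonstrate with two temporary files standing in for the two
    // copies of schema-bundle.bin.
    let dir = std::env::temp_dir();
    let p1 = dir.join("bundle-copy-1.bin");
    let p2 = dir.join("bundle-copy-2.bin");
    fs::write(&p1, b"schema")?;
    fs::write(&p2, b"schema")?;
    assert!(files_identical(p1.to_str().unwrap(), p2.to_str().unwrap())?);

    // Any divergence is detected.
    fs::write(&p2, b"schema!")?;
    assert!(!files_identical(p1.to_str().unwrap(), p2.to_str().unwrap())?);
    println!("cmp sketch ok");
    Ok(())
}
```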
@@ -0,0 +1,65 @@
```
# We use $root_ds as the httpd space.
let ?root_ds = dataspace

# Supplying $root_ds as the last parameter in this relay-listener enables httpd service.
<require-service <relay-listener <tcp "0.0.0.0" 9001> $gatekeeper $root_ds>>

# Regular gatekeeper stuff works too.
<bind <ref { oid: "syndicate" key: #x"" }> $root_ds #f>

# Create an httpd router monitoring $root_ds for requests and bind requests.
<require-service <http-router $root_ds>>

# Create a static file server. When it gets a request, it ignores the first n (here, 1)
# elements of the path, and takes the remainder as relative to its configured directory (here,
# ".").
#
<require-service <http-static-files "." 1>>
#
# It publishes a service object: requests should be asserted to this.
# The http-bind record establishes this mapping.
#
? <service-object <http-static-files "." 1> ?handler> [
  $root_ds += <http-bind #f 9001 get ["files" ...] $handler>
]

# Separately, bind path /d to $index, and respond there.
#
let ?index = dataspace
$root_ds += <http-bind #f 9001 get ["d"] $index>
$index ? <request _ ?k> [
  $k ! <status 200 "OK">
  $k ! <header content-type "text/html">
  $k ! <chunk "<!DOCTYPE html>">
  $k ! <done "<html><body>D</body></html>">
]

# Similarly, bind three paths, /d, /e and /t, to $index2.
# Because /d doubles up, the httpd router gives a warning when it is accessed.
# Accessing /e works fine.
# Accessing /t results in wasted work because of the hijacking listeners below.
#
let ?index2 = dataspace
$root_ds += <http-bind #f 9001 get ["d"] $index2>
$root_ds += <http-bind #f 9001 get ["e"] $index2>
$root_ds += <http-bind #f 9001 get ["t"] $index2>
$index2 ? <request _ ?k> [
  $k ! <status 200 "OK">
  $k ! <header content-type "text/html">
  $k ! <chunk "<!DOCTYPE html>">
  $k ! <done "<html><body>D2</body></html>">
]

# These two hijack /t by listening for raw incoming requests the same way the httpd router
# does. They respond quicker and so win the race. The httpd router's responses are lost.
#
$root_ds ? <request <http-request _ _ _ get ["t"] _ _ _> ?k> [
  $k ! <status 200 "OK">
  $k ! <header content-type "text/html">
  $k ! <done "<html><body>T</body></html>">
]
$root_ds ? <request <http-request _ _ _ get ["t"] _ _ _> ?k> [
  $k ! <status 200 "OK">
  $k ! <header content-type "text/html">
  $k ! <done "<html><body>T2</body></html>">
]
```
@@ -0,0 +1,4 @@
```sh
#!/bin/sh
set -e
rustup update
cargo +nightly install --path `pwd`/syndicate-server
```
@@ -0,0 +1,18 @@
```sh
#!/bin/sh
#
# Set up a git checkout of this repository for local dev use.

exec 2>/dev/tty 1>&2

set -e

[ -d .git ] || exit 0

for fullhook in ./git-hooks/*
do
    hook=$(basename "$fullhook")
    [ -L .git/hooks/$hook ] || (
        echo "Installing $hook hook"
        ln -s ../../git-hooks/$hook .git/hooks/$hook
    )
done
```
@@ -1,6 +1,6 @@
```diff
 [package]
 name = "syndicate-macros"
-version = "0.22.0"
+version = "0.33.0"
 authors = ["Tony Garnock-Jones <tonyg@leastfixedpoint.com>"]
 edition = "2018"
```
@@ -13,11 +13,11 @@ license = "Apache-2.0"
```diff
 proc-macro = true

 [dependencies]
-syndicate = { path = "../syndicate", version = "0.27.0"}
+syndicate = { path = "../syndicate", version = "0.41.0"}

 proc-macro2 = { version = "^1.0", features = ["span-locations"] }
 quote = "^1.0"
-syn = "^1.0"
+syn = { version = "^1.0", features = ["extra-traits"] } # for impl Debug for syn::Expr

 [dev-dependencies]
 tokio = { version = "1.10", features = ["io-std"] }
```
@@ -0,0 +1,133 @@
```rust
use syndicate::actor::*;
use std::env;
use std::sync::Arc;

#[derive(Debug)]
enum Instruction {
    SetPeer(Arc<Ref<Instruction>>),
    HandleMessage(u64),
}

struct Forwarder {
    hop_limit: u64,
    supervisor: Arc<Ref<Instruction>>,
    peer: Option<Arc<Ref<Instruction>>>,
}

impl Drop for Forwarder {
    fn drop(&mut self) {
        let r = self.peer.take();
        let _ = tokio::spawn(async move {
            drop(r);
        });
    }
}

impl Entity<Instruction> for Forwarder {
    fn message(&mut self, turn: &mut Activation, message: Instruction) -> ActorResult {
        match message {
            Instruction::SetPeer(r) => {
                tracing::info!("Setting peer {:?}", r);
                self.peer = Some(r);
            }
            Instruction::HandleMessage(n) => {
                let target = if n >= self.hop_limit { &self.supervisor } else { self.peer.as_ref().expect("peer") };
                turn.message(target, Instruction::HandleMessage(n + 1));
            }
        }
        Ok(())
    }
}

struct Supervisor {
    latency_mode: bool,
    total_transfers: u64,
    remaining_to_receive: u32,
    start_time: Option<std::time::Instant>,
}

impl Entity<Instruction> for Supervisor {
    fn message(&mut self, turn: &mut Activation, message: Instruction) -> ActorResult {
        match message {
            Instruction::SetPeer(_) => {
                tracing::info!("Start");
                self.start_time = Some(std::time::Instant::now());
            },
            Instruction::HandleMessage(_n) => {
                self.remaining_to_receive -= 1;
                if self.remaining_to_receive == 0 {
                    let stop_time = std::time::Instant::now();
                    let duration = stop_time - self.start_time.unwrap();
                    tracing::info!("Stop after {:?}; {:?} messages, so {:?} Hz ({} mode)",
                                   duration,
                                   self.total_transfers,
                                   (1000.0 * self.total_transfers as f64) / duration.as_millis() as f64,
                                   if self.latency_mode { "latency" } else { "throughput" });
                    turn.stop_root();
                }
            },
        }
        Ok(())
    }
}

#[tokio::main]
async fn main() -> ActorResult {
    syndicate::convenient_logging()?;
    Actor::top(None, |t| {
        let args: Vec<String> = env::args().collect();
        let n_actors: u32 = args.get(1).unwrap_or(&"1000000".to_string()).parse()?;
```
|
||||
let n_rounds: u32 = args.get(2).unwrap_or(&"200".to_string()).parse()?;
|
||||
let latency_mode: bool = match args.get(3).unwrap_or(&"throughput".to_string()).as_str() {
|
||||
"latency" => true,
|
||||
"throughput" => false,
|
||||
_other => return Err("Invalid throughput/latency mode".into()),
|
||||
};
|
||||
tracing::info!("Will run {:?} actors for {:?} rounds", n_actors, n_rounds);
|
||||
|
||||
let total_transfers: u64 = n_actors as u64 * n_rounds as u64;
|
||||
let (hop_limit, injection_count) = if latency_mode {
|
||||
(total_transfers, 1)
|
||||
} else {
|
||||
(n_rounds as u64, n_actors)
|
||||
};
|
||||
|
||||
let me = t.create(Supervisor {
|
||||
latency_mode,
|
||||
total_transfers,
|
||||
remaining_to_receive: injection_count,
|
||||
start_time: None,
|
||||
});
|
||||
|
||||
let mut forwarders: Vec<Arc<Ref<Instruction>>> = Vec::new();
|
||||
for _i in 0 .. n_actors {
|
||||
if _i % 10000 == 0 { tracing::info!("Actor {:?}", _i); }
|
||||
forwarders.push(
|
||||
t.spawn_for_entity(None, true, Box::new(
|
||||
Forwarder {
|
||||
hop_limit,
|
||||
supervisor: me.clone(),
|
||||
peer: forwarders.last().cloned(),
|
||||
}))
|
||||
.0.expect("an entity"));
|
||||
}
|
||||
t.message(&forwarders[0], Instruction::SetPeer(forwarders.last().expect("an entity").clone()));
|
||||
t.later(move |t| {
|
||||
t.message(&me, Instruction::SetPeer(me.clone()));
|
||||
t.later(move |t| {
|
||||
let mut injected: u32 = 0;
|
||||
for f in forwarders.into_iter() {
|
||||
if injected >= injection_count {
|
||||
break;
|
||||
}
|
||||
t.message(&f, Instruction::HandleMessage(0));
|
||||
injected += 1;
|
||||
}
|
||||
Ok(())
|
||||
});
|
||||
Ok(())
|
||||
});
|
||||
Ok(())
|
||||
}).await??;
|
||||
Ok(())
|
||||
}
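The `(hop_limit, injection_count)` split above is the heart of the benchmark's two modes: latency mode sends one message on a trip of `total_transfers` hops around the ring, while throughput mode injects one message per actor, each travelling `n_rounds` hops. A stand-alone sketch (not part of the diff; `plan` is a hypothetical helper extracted from the code above) checks that both modes schedule the same total number of transfers:

```rust
// Sketch: mirrors the benchmark's mode parameterization.
// Returns (hop_limit, injection_count) for the given CLI arguments.
fn plan(n_actors: u32, n_rounds: u32, latency_mode: bool) -> (u64, u32) {
    let total_transfers = n_actors as u64 * n_rounds as u64;
    if latency_mode {
        (total_transfers, 1)        // one long trip around the ring
    } else {
        (n_rounds as u64, n_actors) // many short trips, one per actor
    }
}

fn main() {
    for &(a, r) in &[(10u32, 4u32), (1000, 200)] {
        let (hop_lat, inj_lat) = plan(a, r, true);
        let (hop_thr, inj_thr) = plan(a, r, false);
        // In both modes, hop_limit * injection_count == total transfers,
        // so the supervisor's accounting comes out the same either way.
        assert_eq!(hop_lat * inj_lat as u64, a as u64 * r as u64);
        assert_eq!(hop_thr * inj_thr as u64, a as u64 * r as u64);
    }
    println!("ok");
}
```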
@ -0,0 +1,175 @@
use std::env;
use std::sync::Arc;
use std::sync::atomic::AtomicU64;
use std::sync::atomic::Ordering;

use tokio::sync::mpsc::{unbounded_channel, UnboundedSender};

type Ref<T> = UnboundedSender<Box<T>>;

#[derive(Debug)]
enum Instruction {
    SetPeer(Arc<Ref<Instruction>>),
    HandleMessage(u64),
}

struct Forwarder {
    hop_limit: u64,
    supervisor: Arc<Ref<Instruction>>,
    peer: Option<Arc<Ref<Instruction>>>,
}

impl Drop for Forwarder {
    fn drop(&mut self) {
        let r = self.peer.take();
        let _ = tokio::spawn(async move {
            drop(r);
        });
    }
}

enum Action { Continue, Stop }

trait Actor<T> {
    fn message(&mut self, message: T) -> Action;
}

fn send<T: std::marker::Send + 'static>(ch: &Arc<Ref<T>>, message: T) -> () {
    match ch.send(Box::new(message)) {
        Ok(()) => (),
        Err(v) => panic!("Aiee! Could not send {:?}", v),
    }
}

fn spawn<T: std::marker::Send + 'static, R: Actor<T> + std::marker::Send + 'static>(rt: Option<Arc<AtomicU64>>, mut ac: R) -> Arc<Ref<T>> {
    let (tx, mut rx) = unbounded_channel::<Box<T>>();
    if let Some(ref c) = rt {
        c.fetch_add(1, Ordering::SeqCst);
    }
    tokio::spawn(async move {
        loop {
            match rx.recv().await {
                None => break,
                Some(message) => {
                    match ac.message(*message) {
                        Action::Continue => continue,
                        Action::Stop => break,
                    }
                }
            }
        }
        if let Some(c) = rt {
            c.fetch_sub(1, Ordering::SeqCst);
        }
    });
    Arc::new(tx)
}

impl Actor<Instruction> for Forwarder {
    fn message(&mut self, message: Instruction) -> Action {
        match message {
            Instruction::SetPeer(r) => {
                tracing::info!("Setting peer {:?}", r);
                self.peer = Some(r);
            }
            Instruction::HandleMessage(n) => {
                let target = if n >= self.hop_limit { &self.supervisor } else { self.peer.as_ref().expect("peer") };
                send(target, Instruction::HandleMessage(n + 1));
            }
        }
        Action::Continue
    }
}

struct Supervisor {
    latency_mode: bool,
    total_transfers: u64,
    remaining_to_receive: u32,
    start_time: Option<std::time::Instant>,
}

impl Actor<Instruction> for Supervisor {
    fn message(&mut self, message: Instruction) -> Action {
        match message {
            Instruction::SetPeer(_) => {
                tracing::info!("Start");
                self.start_time = Some(std::time::Instant::now());
            },
            Instruction::HandleMessage(_n) => {
                self.remaining_to_receive -= 1;
                if self.remaining_to_receive == 0 {
                    let stop_time = std::time::Instant::now();
                    let duration = stop_time - self.start_time.unwrap();
                    tracing::info!("Stop after {:?}; {:?} messages, so {:?} Hz ({} mode)",
                                   duration,
                                   self.total_transfers,
                                   (1000.0 * self.total_transfers as f64) / duration.as_millis() as f64,
                                   if self.latency_mode { "latency" } else { "throughput" });
                    return Action::Stop;
                }
            },
        }
        Action::Continue
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + std::marker::Send + std::marker::Sync>> {
    syndicate::convenient_logging()?;

    let args: Vec<String> = env::args().collect();
    let n_actors: u32 = args.get(1).unwrap_or(&"1000000".to_string()).parse()?;
    let n_rounds: u32 = args.get(2).unwrap_or(&"200".to_string()).parse()?;
    let latency_mode: bool = match args.get(3).unwrap_or(&"throughput".to_string()).as_str() {
        "latency" => true,
        "throughput" => false,
        _other => return Err("Invalid throughput/latency mode".into()),
    };
    tracing::info!("Will run {:?} actors for {:?} rounds", n_actors, n_rounds);

    let count = Arc::new(AtomicU64::new(0));

    let total_transfers: u64 = n_actors as u64 * n_rounds as u64;
    let (hop_limit, injection_count) = if latency_mode {
        (total_transfers, 1)
    } else {
        (n_rounds as u64, n_actors)
    };

    let me = spawn(Some(count.clone()), Supervisor {
        latency_mode,
        total_transfers,
        remaining_to_receive: injection_count,
        start_time: None,
    });

    let mut forwarders: Vec<Arc<Ref<Instruction>>> = Vec::new();
    for _i in 0 .. n_actors {
        if _i % 10000 == 0 { tracing::info!("Actor {:?}", _i); }
        forwarders.push(spawn(None, Forwarder {
            hop_limit,
            supervisor: me.clone(),
            peer: forwarders.last().cloned(),
        }));
    }
    send(&forwarders[0], Instruction::SetPeer(forwarders.last().expect("an entity").clone()));
    send(&me, Instruction::SetPeer(me.clone()));

    let mut injected: u32 = 0;
    for f in forwarders.into_iter() {
        if injected >= injection_count {
            break;
        }
        send(&f, Instruction::HandleMessage(0));
        injected += 1;
    }

    loop {
        if count.load(Ordering::SeqCst) == 0 {
            break;
        }
        tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
    }

    Ok(())
}
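The `main` loop above detects completion by polling an `AtomicU64` that `spawn` increments for counted actors (here, only the supervisor) and decrements when their mailbox loop exits. A minimal stdlib-only sketch of the same pattern, using OS threads in place of tokio tasks (names and timings are illustrative, not from the diff):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;
use std::time::Duration;

fn main() {
    let live = Arc::new(AtomicU64::new(0));
    for i in 0..4u64 {
        let live = live.clone();
        live.fetch_add(1, Ordering::SeqCst);   // count the task in before it starts
        thread::spawn(move || {
            // stand-in for the actor's mailbox loop
            thread::sleep(Duration::from_millis(10 * i));
            live.fetch_sub(1, Ordering::SeqCst); // count it out when it exits
        });
    }
    // Poll until every counted task has finished, as the benchmark's main loop does.
    while live.load(Ordering::SeqCst) != 0 {
        thread::sleep(Duration::from_millis(5));
    }
    println!("all tasks done");
}
```

Because each increment happens on the spawning thread before the corresponding task can decrement, the counter only reads zero once every counted task has finished.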
@ -27,6 +27,7 @@ use pat::lit;

 enum SymbolVariant<'a> {
     Normal(&'a str),
+    #[allow(dead_code)] // otherwise we get 'warning: field `0` is never read'
     Binder(&'a str),
     Substitution(&'a str),
     Discard,
@ -35,7 +36,7 @@ enum SymbolVariant<'a> {
 fn compile_sequence_members(vs: &[IOValue]) -> Vec<TokenStream> {
     vs.iter().enumerate().map(|(i, f)| {
         let p = compile_pattern(f);
-        quote!((#i .into(), #p))
+        quote!((syndicate::value::Value::from(#i).wrap(), #p))
     }).collect::<Vec<_>>()
 }
@ -79,10 +80,6 @@ impl ValueCompiler {
         match v.value() {
             Value::Boolean(b) =>
                 quote!(#V_::Value::from(#b).wrap()),
-            Value::Float(f) => {
-                let f = f.0;
-                quote!(#V_::Value::from(#f).wrap())
-            }
             Value::Double(d) => {
                 let d = d.0;
                 quote!(#V_::Value::from(#d).wrap())
@ -154,16 +151,14 @@ fn compile_pattern(v: &IOValue) -> TokenStream {
     #[allow(non_snake_case)]
     let V_: TokenStream = quote!(syndicate::value);
     #[allow(non_snake_case)]
-    let MapFromIterator_: TokenStream = quote!(<#V_::Map<_, _> as std::iter::FromIterator<_>>::from_iter);
+    let MapFrom_: TokenStream = quote!(<#V_::Map<_, _>>::from);

     match v.value() {
         Value::Symbol(s) => match analyze_symbol(&s, true) {
             SymbolVariant::Binder(_) =>
-                quote!(#P_::Pattern::DBind(Box::new(#P_::DBind {
-                    pattern: #P_::Pattern::DDiscard(Box::new(#P_::DDiscard))
-                }))),
+                quote!(#P_::Pattern::Bind{ pattern: Box::new(#P_::Pattern::Discard) }),
             SymbolVariant::Discard =>
-                quote!(#P_::Pattern::DDiscard(Box::new(#P_::DDiscard))),
+                quote!(#P_::Pattern::Discard),
             SymbolVariant::Substitution(s) =>
                 lit(Ident::new(s, Span::call_site())),
             SymbolVariant::Normal(_) =>
@ -175,9 +170,7 @@ fn compile_pattern(v: &IOValue) -> TokenStream {
             Some(label) =>
                 if label.starts_with("$") && r.arity() == 1 {
                     let nested = compile_pattern(&r.fields()[0]);
-                    quote!(#P_::Pattern::DBind(Box::new(#P_::DBind {
-                        pattern: #nested
-                    })))
+                    quote!(#P_::Pattern::Bind{ pattern: Box::new(#nested) })
                 } else {
                     let label_stx = if label.starts_with("=") {
                         let id = Ident::new(&label[1..], Span::call_site());
@ -186,18 +179,19 @@ fn compile_pattern(v: &IOValue) -> TokenStream {
                         quote!(#V_::Value::symbol(#label).wrap())
                     };
                     let members = compile_sequence_members(r.fields());
-                    quote!(#P_::Pattern::DCompound(Box::new(#P_::DCompound::Rec {
-                        label: #label_stx,
-                        fields: vec![#(#members),*],
-                    })))
+                    quote!(#P_::Pattern::Group {
+                        type_: Box::new(#P_::GroupType::Rec { label: #label_stx }),
+                        entries: #MapFrom_([#(#members),*]),
+                    })
                 }
         }
     }
     Value::Sequence(vs) => {
         let members = compile_sequence_members(vs);
-        quote!(#P_::Pattern::DCompound(Box::new(#P_::DCompound::Arr {
-            items: vec![#(#members),*],
-        })))
+        quote!(#P_::Pattern::Group {
+            type_: Box::new(#P_::GroupType::Arr),
+            entries: #MapFrom_([#(#members),*]),
+        })
     }
     Value::Set(_) =>
         panic!("Cannot match sets in patterns"),
@ -207,9 +201,10 @@ fn compile_pattern(v: &IOValue) -> TokenStream {
             let v = compile_pattern(v);
             quote!((#k, #v))
         }).collect::<Vec<_>>();
-        quote!(#P_::Pattern::DCompound(Box::new(#P_::DCompound::Dict {
-            entries: #MapFromIterator_(vec![#(#members),*])
-        })))
+        quote!(#P_::Pattern::Group {
+            type_: Box::new(#P_::GroupType::Dict),
+            entries: #MapFrom_([#(#members),*]),
+        })
     }
     _ => lit(ValueCompiler::for_patterns().compile(v)),
 }
@ -15,10 +15,9 @@ pub fn lit<T: ToTokens>(e: T) -> TokenStream2 {
 }

 fn compile_sequence_members(stxs: &Vec<Stx>) -> Result<Vec<TokenStream2>, &'static str> {
-    stxs.iter().map(|stx| {
-        // let p = to_pattern_expr(stx)?;
-        // Ok(quote!(#p))
-        to_pattern_expr(stx)
+    stxs.iter().enumerate().map(|(i, stx)| {
+        let p = to_pattern_expr(stx)?;
+        Ok(quote!((syndicate::value::Value::from(#i).wrap(), #p)))
     }).collect()
 }
@ -28,7 +27,7 @@ pub fn to_pattern_expr(stx: &Stx) -> Result<TokenStream2, &'static str> {
     #[allow(non_snake_case)]
     let V_: TokenStream2 = quote!(syndicate::value);
     #[allow(non_snake_case)]
-    let MapFromIterator_: TokenStream2 = quote!(<#V_::Map<_, _> as std::iter::FromIterator<_>>::from_iter);
+    let MapFrom_: TokenStream2 = quote!(<#V_::Map<_, _>>::from);

     match stx {
         Stx::Atom(v) =>
@ -41,26 +40,27 @@ pub fn to_pattern_expr(stx: &Stx) -> Result<TokenStream2, &'static str> {
                     None => to_pattern_expr(&Stx::Discard)?,
                 }
             };
-            Ok(quote!(#P_::Pattern::DBind(Box::new(#P_::DBind { pattern: #inner_pat_expr }))))
+            Ok(quote!(#P_::Pattern::Bind { pattern: Box::new(#inner_pat_expr) }))
         }
         Stx::Subst(e) =>
             Ok(lit(e)),
         Stx::Discard =>
-            Ok(quote!(#P_::Pattern::DDiscard(Box::new(#P_::DDiscard)))),
+            Ok(quote!(#P_::Pattern::Discard)),

         Stx::Rec(l, fs) => {
             let label = to_value_expr(&*l)?;
             let members = compile_sequence_members(fs)?;
-            Ok(quote!(#P_::Pattern::DCompound(Box::new(#P_::DCompound::Rec {
-                label: #label,
-                fields: vec![#(#members),*],
-            }))))
+            Ok(quote!(#P_::Pattern::Group {
+                type_: Box::new(#P_::GroupType::Rec { label: #label }),
+                entries: #MapFrom_([#(#members),*]),
+            }))
         },
         Stx::Seq(stxs) => {
             let members = compile_sequence_members(stxs)?;
-            Ok(quote!(#P_::Pattern::DCompound(Box::new(#P_::DCompound::Arr {
-                items: vec![#(#members),*],
-            }))))
+            Ok(quote!(#P_::Pattern::Group {
+                type_: Box::new(#P_::GroupType::Arr),
+                entries: #MapFrom_([#(#members),*]),
+            }))
         }
         Stx::Set(_stxs) =>
             Err("Set literals not supported in patterns"),
@ -70,9 +70,10 @@ pub fn to_pattern_expr(stx: &Stx) -> Result<TokenStream2, &'static str> {
                 let v = to_pattern_expr(v)?;
                 Ok(quote!((#k, #v)))
             }).collect::<Result<Vec<_>, &'static str>>()?;
-            Ok(quote!(#P_::Pattern::DCompound(Box::new(#P_::DCompound::Dict {
-                entries: #MapFromIterator_(vec![#(#members),*])
-            }))))
+            Ok(quote!(#P_::Pattern::Group {
+                type_: Box::new(#P_::GroupType::Dict),
+                entries: #MapFrom_([#(#members),*])
+            }))
         }
     }
 }
@ -1,5 +1,6 @@
 use proc_macro2::Delimiter;
+use proc_macro2::LineColumn;
 use proc_macro2::Span;
 use proc_macro2::TokenStream;

 use syn::ExprLit;
@ -14,7 +15,6 @@ use syn::parse::Parser;
 use syn::parse::ParseStream;
 use syn::parse_str;

-use syndicate::value::Float;
 use syndicate::value::Double;
 use syndicate::value::IOValue;
 use syndicate::value::NestedValue;
@ -70,24 +70,41 @@ fn punct_char(c: Cursor) -> Option<(char, Cursor)> {
     c.punct().map(|(p, c)| (p.as_char(), c))
 }

+fn start_pos(s: Span) -> LineColumn {
+    // We would like to write
+    //     s.start()
+    // here, but until [1] is fixed (perhaps via [2]), we have to go the unsafe route
+    // and assume we are in procedural macro context.
+    // [1]: https://github.com/dtolnay/proc-macro2/issues/402
+    // [2]: https://github.com/dtolnay/proc-macro2/pull/407
+    let u = s.unwrap().start();
+    LineColumn { column: u.column(), line: u.line() }
+}
+
+fn end_pos(s: Span) -> LineColumn {
+    // See start_pos
+    let u = s.unwrap().end();
+    LineColumn { column: u.column(), line: u.line() }
+}
+
 fn parse_id(mut c: Cursor) -> Result<(String, Cursor)> {
     let mut id = String::new();
-    let mut prev_pos = c.span().start();
+    let mut prev_pos = start_pos(c.span());
     loop {
-        if c.eof() || c.span().start() != prev_pos {
+        if c.eof() || start_pos(c.span()) != prev_pos {
             return Ok((id, c));
         } else if let Some((p, next)) = c.punct() {
             match p.as_char() {
                 '<' | '>' | '(' | ')' | '{' | '}' | '[' | ']' | ',' | ':' => return Ok((id, c)),
                 ch => {
                     id.push(ch);
-                    prev_pos = c.span().end();
+                    prev_pos = end_pos(c.span());
                     c = next;
                 }
             }
         } else if let Some((i, next)) = c.ident() {
             id.push_str(&i.to_string());
-            prev_pos = i.span().end();
+            prev_pos = end_pos(i.span());
             c = next;
         } else {
             return Ok((id, c));
@ -153,7 +170,7 @@ fn parse_kv(c: Cursor) -> Result<((Stx, Stx), Cursor)> {
 }

 fn adjacent_ident(pos: LineColumn, c: Cursor) -> (Option<Ident>, Cursor) {
-    if c.span().start() != pos {
+    if start_pos(c.span()) != pos {
         (None, c)
     } else if let Some((id, next)) = c.ident() {
         (Some(id), next)
@ -177,8 +194,8 @@ fn parse_generic<T: Parse>(mut c: Cursor) -> Option<(T, Cursor)> {
             // OK, because parse2 checks for end-of-stream, let's chop
             // the input at the position of the error and try again (!).
             let mut collected = Vec::new();
-            let upto = e.span().start();
-            while !c.eof() && c.span().start() != upto {
+            let upto = start_pos(e.span());
+            while !c.eof() && start_pos(c.span()) != upto {
                 let (tt, next) = c.token_tree().unwrap();
                 collected.push(tt);
                 c = next;
@ -200,7 +217,7 @@ fn parse1(c: Cursor) -> Result<(Stx, Cursor)> {
                 Ok((Stx::Rec(Box::new(q.remove(0)), q), c))
             }),
             '$' => {
-                let (maybe_id, next) = adjacent_ident(p.span().end(), next);
+                let (maybe_id, next) = adjacent_ident(end_pos(p.span()), next);
                 let (maybe_type, next) = if let Some((':', next)) = punct_char(next) {
                     match parse_generic::<Type>(next) {
                         Some((t, next)) => (Some(t), next),
@ -248,7 +265,7 @@ fn parse1(c: Cursor) -> Result<(Stx, Cursor)> {
                 IOValue::new(i.base10_parse::<i128>()?)
             }
             Lit::Float(f) => if f.suffix() == "f32" {
-                IOValue::new(&Float(f.base10_parse::<f32>()?))
+                IOValue::new(&Double(f.base10_parse::<f32>()? as f64))
             } else {
                 IOValue::new(&Double(f.base10_parse::<f64>()?))
             }
@ -50,10 +50,6 @@ pub fn value_to_value_expr(v: &IOValue) -> TokenStream2 {
     match v.value() {
         Value::Boolean(b) =>
             quote!(#V_::Value::from(#b).wrap()),
-        Value::Float(f) => {
-            let f = f.0;
-            quote!(#V_::Value::from(#f).wrap())
-        }
         Value::Double(d) => {
             let d = d.0;
             quote!(#V_::Value::from(#d).wrap())
@ -0,0 +1,15 @@
{
    "folders": [
        {
            "path": "."
        },
        {
            "path": "../syndicate-protocols"
        }
    ],
    "settings": {
        "files.exclude": {
            "target": true
        }
    }
}
@ -0,0 +1,23 @@
[package]
name = "syndicate-schema-plugin"
version = "0.10.1"
authors = ["Tony Garnock-Jones <tonyg@leastfixedpoint.com>"]
edition = "2018"

description = "Support for using Preserves Schema with Syndicate macros."
homepage = "https://syndicate-lang.org/"
repository = "https://git.syndicate-lang.org/syndicate-lang/syndicate-rs"
license = "Apache-2.0"

[lib]

[build-dependencies]
preserves-schema = "5.995"

[dependencies]
preserves = "4.995"
preserves-schema = "5.995"
lazy_static = "1.4"

[package.metadata.workspaces]
independent = true
@ -0,0 +1,15 @@
use preserves_schema::compiler::*;

fn main() -> std::io::Result<()> {
    let buildroot = std::path::PathBuf::from(std::env::var_os("OUT_DIR").unwrap());

    let mut gen_dir = buildroot.clone();
    gen_dir.push("src/schemas");

    let mut c = CompilerConfig::new("crate::schemas".to_owned());
    c.add_external_module(ExternalModule::new(vec!["EntityRef".to_owned()], "crate::placeholder"));

    let inputs = expand_inputs(&vec!["./schema-bundle.bin".to_owned()])?;
    c.load_schemas_and_bundles(&inputs, &vec![])?;
    compile(&c, &mut CodeCollector::files(gen_dir))
}
@ -0,0 +1,44 @@
(binary file: Preserves schema bundle `schema-bundle.bin`, containing the modules rpc, tcp, http, noise, timer, trace, stdenv, stream, and sturdy; its binary content is not representable as text)