Compare commits


621 Commits

Author SHA1 Message Date
Tony Garnock-Jones 64a4074273 rustup-and-install.sh 2024-05-28 09:37:50 +02:00
Tony Garnock-Jones 53d859e50f Bump deps 2024-05-28 09:37:46 +02:00
Tony Garnock-Jones d301c09b02 Release independent packages
syndicate@0.40.1

Generated by cargo-workspaces
2024-05-24 09:32:28 +02:00
Tony Garnock-Jones 2bff59c41a Bump deps 2024-05-24 09:30:42 +02:00
Tony Garnock-Jones 39f0e8cdf1 Handle Packet::Nop 2024-05-19 21:54:48 +02:00
Tony Garnock-Jones 3e0d6af497 Merge latest changes from the syndicate-protocols repository 2024-05-19 21:51:31 +02:00
Tony Garnock-Jones 599b4ed469 Packet::Nop 2024-05-19 21:32:44 +02:00
Tony Garnock-Jones 6e555c9fd5 Update binary schemas 2024-04-19 12:57:14 +02:00
Emery Hemingway 8ebde104ca http: order absent fields first
This makes the absent variants the default initialization for
some implementations.
2024-04-19 10:51:40 +02:00
Tony Garnock-Jones 6468e16790 Bump preserves dep 2024-04-12 19:57:23 +02:00
Tony Garnock-Jones 65101e900e Release independent packages
syndicate@0.40.0
syndicate-macros@0.32.0
syndicate-schema-plugin@0.9.0
syndicate-server@0.45.0
syndicate-tools@0.18.0

Generated by cargo-workspaces
2024-04-10 17:04:25 +02:00
Tony Garnock-Jones 581886835a New dataspace pattern implementation; update HTTP server 2024-04-10 17:03:09 +02:00
Tony Garnock-Jones dcb1aec142 Merge latest changes from the syndicate-protocols repository 2024-04-10 15:43:06 +02:00
Tony Garnock-Jones c0239cf322 And with that we are almost back where we started with http.prs! 2024-04-10 15:16:35 +02:00
Tony Garnock-Jones 9cc4175f24 Cope with HTTP/1.0's optional Host header 2024-04-10 14:54:19 +02:00
Tony Garnock-Jones 70f42dd931 Another revision of http.prs 2024-04-10 14:31:27 +02:00
Tony Garnock-Jones ef1ebe6412 Sigh. <done> turns out to be a good idea in addition to <processing> 2024-04-10 13:24:25 +02:00
Tony Garnock-Jones deec008c66 No taskset on osx 2024-04-10 11:07:22 +02:00
Tony Garnock-Jones 008671d0b2 Bump deps incl preserves-schema for a keyword-avoiding fix 2024-04-09 22:41:58 +02:00
Tony Garnock-Jones 9fcf22e1b5 Merge latest changes from the syndicate-protocols repository 2024-04-09 15:16:46 +02:00
Tony Garnock-Jones ca18ca08df Alternative representation of dataspacePatterns 2024-04-09 09:15:21 +02:00
Tony Garnock-Jones 40ca168eac Repair typo 2024-04-09 09:13:51 +02:00
Tony Garnock-Jones 5a73e8d4c3 Alter dataspacePatterns language to make rec and arr more like dict 2024-04-04 16:31:09 +02:00
Tony Garnock-Jones 91b26001d8 There isn't an /etc/mime.types on OSX 2024-04-03 22:32:54 +02:00
Tony Garnock-Jones b83b39515d Release independent packages
syndicate@0.39.0
syndicate-macros@0.31.0
syndicate-schema-plugin@0.8.0
syndicate-server@0.44.0
syndicate-tools@0.17.0

Generated by cargo-workspaces
2024-04-01 16:53:42 +02:00
Tony Garnock-Jones d9fa6362af Merge latest changes from the syndicate-protocols repository 2024-04-01 16:52:57 +02:00
Tony Garnock-Jones 94598a574b Update HTTP service protocol 2024-04-01 16:52:24 +02:00
Tony Garnock-Jones 80ad0914ed Revise http protocol 2024-04-01 16:52:24 +02:00
Tony Garnock-Jones bdb0cc1023 Repair severe error in turn rollback 2024-04-01 16:52:24 +02:00
Tony Garnock-Jones 710ff91a64 Revise http protocol 2024-04-01 15:56:07 +02:00
Tony Garnock-Jones d3748a286b Release independent packages
syndicate-server@0.43.1

Generated by cargo-workspaces
2024-04-01 15:08:11 +02:00
Tony Garnock-Jones a56aec2c30 Tweak tracing in http_router 2024-04-01 15:01:33 +02:00
Tony Garnock-Jones 0c06ae9601 Repair path matching where no explicit PathPatternElement::Rest is present 2024-04-01 14:58:55 +02:00
Tony Garnock-Jones 1f0c9d2883 Dep bump 2024-03-30 11:36:42 +01:00
Tony Garnock-Jones 615830f799 Release independent packages
syndicate@0.38.0

Generated by cargo-workspaces
2024-03-30 11:02:01 +01:00
Tony Garnock-Jones 3c44768a72 Convenience syndicate::relay::stdio_service 2024-03-30 11:00:22 +01:00
Tony Garnock-Jones 04bb8c2f23 Release independent packages
syndicate@0.37.1

Generated by cargo-workspaces
2024-03-29 10:23:40 +01:00
Tony Garnock-Jones 9084c1781e Repair nested-panic situation 2024-03-29 10:23:21 +01:00
Tony Garnock-Jones 8a817fcb4f Release independent packages
syndicate@0.37.0
syndicate-macros@0.30.0
syndicate-schema-plugin@0.7.0
syndicate-server@0.43.0
syndicate-tools@0.16.0

Generated by cargo-workspaces
2024-03-28 16:33:56 +01:00
Tony Garnock-Jones 2ed2b38edc Repair noise session introduction 2024-03-28 16:32:46 +01:00
Tony Garnock-Jones 5090625f47 Bump deps 2024-03-28 15:50:36 +01:00
Tony Garnock-Jones a7ede65bad Merge latest changes from the syndicate-protocols repository 2024-03-28 15:50:12 +01:00
Tony Garnock-Jones c59e044695 Set embeddedType for noise 2024-03-28 15:49:48 +01:00
Tony Garnock-Jones ef98217a3a Merge latest changes from the syndicate-protocols repository 2024-03-28 15:17:37 +01:00
Tony Garnock-Jones bf0d47f1b7 Repair noise protocol 2024-03-28 15:17:28 +01:00
Tony Garnock-Jones fef41f39eb Release independent packages
syndicate@0.36.1

Generated by cargo-workspaces
2024-03-22 20:51:30 +01:00
Tony Garnock-Jones 0b72b4029b Repair reimported, attenuated references. 2024-03-22 20:51:02 +01:00
Tony Garnock-Jones 40a239c9eb Release independent packages
syndicate-server@0.42.0

Generated by cargo-workspaces
2024-03-22 11:24:21 +01:00
Tony Garnock-Jones 55456621d4 Handle refinement to gatekeeper protocol allowing JIT binding and/or direct rejection 2024-03-22 11:22:58 +01:00
Tony Garnock-Jones 7797a3cd09 Updated description of gatekeeper protocol 2024-03-22 10:11:57 +01:00
Tony Garnock-Jones eb9d9bed0f Generalize target-stompling-avoidance originally only for docker 2024-03-08 10:59:45 +01:00
Tony Garnock-Jones b96c469ef5 Put release profile settings back the way they should be 2024-03-08 10:51:04 +01:00
Tony Garnock-Jones 34f611f4fe Release independent packages
syndicate@0.36.0
syndicate-macros@0.29.0
syndicate-schema-plugin@0.6.0
syndicate-server@0.41.0
syndicate-tools@0.15.0

Generated by cargo-workspaces
2024-03-08 10:48:11 +01:00
Tony Garnock-Jones 58c24c30c4 Update Preserves to 0.995 2024-03-08 10:47:52 +01:00
Tony Garnock-Jones fa990bc042 Implement a $control entity, a message <exit n>, and a --control command-line flag. 2024-03-07 09:27:58 +01:00
Tony Garnock-Jones 060ba36d2e Release independent packages
syndicate-macros@0.28.1

Generated by cargo-workspaces
2024-03-04 10:15:51 +01:00
Tony Garnock-Jones ecd5e87823 Bump deps 2024-03-04 10:15:36 +01:00
Tony Garnock-Jones a401e5fcd1 A little fairer 2024-03-04 10:11:17 +01:00
Tony Garnock-Jones 5db05b2df2 Release independent packages
syndicate@0.35.0
syndicate-macros@0.28.0
syndicate-schema-plugin@0.5.0
syndicate-server@0.40.0
syndicate-tools@0.14.0

Generated by cargo-workspaces
2024-03-04 10:08:47 +01:00
Tony Garnock-Jones f4a4b4d595 Reuse a single Activation per actor: this merges RunningActor with Activation 2024-03-04 10:07:31 +01:00
Tony Garnock-Jones b7d4bd4b58 Avoid uselessly computing turn descriptions when there is no listener for them 2024-03-03 14:15:56 +01:00
Tony Garnock-Jones 41cf85f865 tokio-ring.rs 2024-03-03 10:34:25 +01:00
Tony Garnock-Jones 4fcb14d63e Latency-mode for syndicate-macros/example/ring.rs 2024-03-03 10:34:18 +01:00
Tony Garnock-Jones b4f355aa0d Oops, had ExitStatus without derive Debug 2024-02-24 21:58:56 +01:00
Tony Garnock-Jones 5a431b2060 Clean up imports 2024-02-24 21:58:46 +01:00
Tony Garnock-Jones 1ff222b291 Demote terminate-on-drop to a debug message rather than an error 2024-02-24 13:08:32 +01:00
Tony Garnock-Jones e501d0f76a Repair warnings 2024-02-24 13:06:22 +01:00
Tony Garnock-Jones 2e65d31d5d Release independent packages
syndicate@0.34.0
syndicate-macros@0.27.0
syndicate-schema-plugin@0.4.0
syndicate-server@0.39.0
syndicate-tools@0.13.0

Generated by cargo-workspaces
2024-02-05 23:41:53 +01:00
Tony Garnock-Jones 852f0f4722 Switch embedded from `#!` to `#:` 2024-02-05 23:40:44 +01:00
Tony Garnock-Jones 9850c73993 Merge latest changes from the syndicate-protocols repository 2024-02-05 23:34:05 +01:00
Tony Garnock-Jones 9864ce0ec8 Switch `#!` to `#:` 2024-02-05 23:14:19 +01:00
Tony Garnock-Jones 19b1e84e43 Update deps 2024-02-03 15:25:51 +01:00
Tony Garnock-Jones 3649cc1237 Release independent packages
syndicate@0.33.0
syndicate-macros@0.26.0
syndicate-schema-plugin@0.3.0
syndicate-server@0.38.0
syndicate-tools@0.12.0

Generated by cargo-workspaces
2024-02-03 15:24:55 +01:00
Tony Garnock-Jones 0f2d9239f9 Remove now-retired Float references 2024-02-03 15:24:28 +01:00
Tony Garnock-Jones 0514f11d0f Merge latest changes from the syndicate-protocols repository 2024-02-03 15:17:56 +01:00
Tony Garnock-Jones 12428bbdf6 Switch to Preserves 0.993 2024-02-03 15:17:14 +01:00
Tony Garnock-Jones 5dd68e87c1 Preserves 0.993 lacks float 2024-02-03 15:16:23 +01:00
Tony Garnock-Jones e2a32b891d Release independent packages
syndicate@0.32.0
syndicate-macros@0.25.1
syndicate-schema-plugin@0.2.2
syndicate-server@0.37.0
syndicate-tools@0.11.0

Generated by cargo-workspaces
2024-02-03 15:03:21 +01:00
Tony Garnock-Jones 461ac034f8 Avoid double-execution within a round; see syndicate-lang/syndicate-js#3 2023-12-19 23:12:13 +13:00
Tony Garnock-Jones 19cbceda7a Merge latest changes from the syndicate-protocols repository 2023-12-19 21:38:12 +13:00
Tony Garnock-Jones 97876335ba Save a few bytes on the wire. If not now, never I guess 2023-12-19 21:37:41 +13:00
Tony Garnock-Jones d7b330e6dd stdenv.prs 2023-12-04 22:25:40 +01:00
Tony Garnock-Jones 3cbe17790d Release independent packages
syndicate-server@0.36.1

Generated by cargo-workspaces
2023-11-26 00:27:58 +01:00
Tony Garnock-Jones 1d97ed1b55 Retract request assertions for completed HTTP requests 2023-11-26 00:27:45 +01:00
Tony Garnock-Jones 15914aa153 Another way to do it 2023-11-24 14:38:25 +01:00
Tony Garnock-Jones 4f42bbe7b6 Bump deps (specifically preserves) 2023-11-24 14:26:47 +01:00
Tony Garnock-Jones 9c32a4a4b8 Release independent packages
syndicate@0.31.1
syndicate-schema-plugin@0.2.1
syndicate-server@0.36.0
syndicate-tools@0.10.0

Generated by cargo-workspaces
2023-11-24 14:05:05 +01:00
Tony Garnock-Jones 56f04786ab New gatekeeper internal-service, for partitioning access 2023-11-24 14:04:33 +01:00
Tony Garnock-Jones 545e247c21 Add `--caveat` option to `syndicate-macaroon mint` 2023-11-24 13:23:20 +01:00
Tony Garnock-Jones 06f16d42ec Bump preserves-schema dep 2023-11-18 16:29:25 +01:00
Tony Garnock-Jones fe861e516f Release independent packages
syndicate-server@0.35.2

Generated by cargo-workspaces
2023-11-17 12:55:38 +01:00
Tony Garnock-Jones 13c841ce6e Don't enable HTTP from the command-line -p flag. Closes #3. 2023-11-17 12:55:04 +01:00
Tony Garnock-Jones 9ae1be6f56 Further tweak logging 2023-11-17 12:53:49 +01:00
Tony Garnock-Jones 9786bcb285 Release independent packages
syndicate-server@0.35.1

Generated by cargo-workspaces
2023-11-17 12:50:32 +01:00
Tony Garnock-Jones abb2978b9a Clean up logging 2023-11-17 12:50:17 +01:00
Tony Garnock-Jones b1e20ac706 Update README instructions 2023-11-15 21:06:19 +01:00
Tony Garnock-Jones 34b59cff3b Mention exposed port in Dockerfile 2023-11-15 17:52:16 +01:00
Tony Garnock-Jones d514a5178f Release independent packages
syndicate@0.31.0
syndicate-macros@0.25.0
syndicate-schema-plugin@0.2.0
syndicate-server@0.35.0
syndicate-tools@0.9.0

Generated by cargo-workspaces
2023-11-15 12:07:44 +01:00
Tony Garnock-Jones e88c335735 Bump version 2023-11-15 12:06:03 +01:00
Tony Garnock-Jones a38765affa Static file service 2023-11-14 00:56:10 +01:00
Tony Garnock-Jones 65dae05890 Multiplex regular HTTP on existing TCP/WebSocket connections 2023-11-13 21:52:27 +01:00
Tony Garnock-Jones 090ac8780f Add "KeepAlive" for when a driver is still getting ready to expose an Entity but hasn't done so yet. 2023-11-12 10:14:54 +01:00
Tony Garnock-Jones bbaacd3038 Cargo.lock 2023-11-11 01:36:26 +01:00
Tony Garnock-Jones 1d61ea0c8e Generic pattern_plugin implementation 2023-11-10 23:19:22 +01:00
Tony Garnock-Jones 1e9e60207b Release independent packages
syndicate@0.30.0
syndicate-macros@0.25.0
syndicate-schema-plugin@0.1.0
syndicate-server@0.34.1

Generated by cargo-workspaces
2023-11-10 22:55:47 +01:00
Tony Garnock-Jones 702057023d Split out syndicate-schema-plugin 2023-11-10 22:54:29 +01:00
Tony Garnock-Jones 1f7930d31a ring.rs 2023-11-08 19:30:26 +01:00
Tony Garnock-Jones 764fb3b866 Remove (trivial) unnecessary clone 2023-11-07 00:40:43 +01:00
Tony Garnock-Jones 726265132f Small initial capacity 2023-11-07 00:11:59 +01:00
Tony Garnock-Jones f6b6dd25f1 Small performance win from avoiding use of HashMap in single-receiver case 2023-11-06 23:54:59 +01:00
Tony Garnock-Jones 94c7de2a08 Bump deps 2023-11-01 00:20:50 +01:00
Tony Garnock-Jones e4c2634088 Release independent packages
syndicate@0.30.0
syndicate-macros@0.25.0
syndicate-server@0.34.0
syndicate-tools@0.9.0

Generated by cargo-workspaces
2023-10-31 22:58:28 +01:00
Tony Garnock-Jones cbaeba7bba Update for Preserves 0.991 2023-10-31 22:58:02 +01:00
Tony Garnock-Jones f8c76e9230 Merge latest changes from the syndicate-protocols repository 2023-10-31 22:54:40 +01:00
Tony Garnock-Jones fe9ceaf65c Update comment syntax for Preserves 0.991 2023-10-31 21:56:44 +01:00
Tony Garnock-Jones 60e6c6badf Avoid spurious "Invalid Preserves tag 0" message when server quits before sending anything 2023-10-19 12:40:38 +02:00
Tony Garnock-Jones 2bf2e29dc2 Release independent packages
syndicate@0.29.1
syndicate-server@0.33.2
syndicate-tools@0.8.1

Generated by cargo-workspaces
2023-10-18 22:51:15 +02:00
Tony Garnock-Jones 9a148ecfcc Good grief, I forgot to update the preserves crate versions 2023-10-18 22:50:54 +02:00
Tony Garnock-Jones 2104bc1ff0 Release independent packages
syndicate-server@0.33.1

Generated by cargo-workspaces
2023-10-18 14:22:50 +02:00
Tony Garnock-Jones 17a9c96342 Update protocols for preserves 0.990 2023-10-18 14:22:18 +02:00
Tony Garnock-Jones 3c4ba48624 Release independent packages
syndicate@0.29.0
syndicate-macros@0.24.0
syndicate-server@0.33.0
syndicate-tools@0.8.0

Generated by cargo-workspaces
2023-10-18 14:03:54 +02:00
Tony Garnock-Jones e063a3f84d Merge latest changes from the syndicate-protocols repository 2023-10-18 14:02:38 +02:00
Tony Garnock-Jones 72566ac223 Update for Preserves 0.990 2023-10-18 14:02:28 +02:00
Tony Garnock-Jones 4e30ef48dc Add syndicate-tools to fixtags.sh 2023-10-05 10:01:09 +02:00
Tony Garnock-Jones d66840bae7 Update internal dependencies 2023-10-05 09:59:31 +02:00
Tony Garnock-Jones 768fdd6448 Release independent packages
syndicate@0.28.3
syndicate-macros@0.23.2
syndicate-server@0.32.2
syndicate-tools@0.7.1

Generated by cargo-workspaces
2023-10-05 09:57:24 +02:00
Tony Garnock-Jones 8055895319 BUMP_ARGS 2023-10-05 09:56:39 +02:00
Tony Garnock-Jones a83999d6ed Build each docker image with a separate target directory, because it turns out they seem to pollute each other if they all share one! 2023-10-05 09:53:53 +02:00
Tony Garnock-Jones 1f7b7a02b1 Enable jemalloc feature for simple benchmarking 2023-10-05 09:53:27 +02:00
Tony Garnock-Jones 24b6217897 Make jemalloc optional 2023-10-05 09:47:22 +02:00
Tony Garnock-Jones d517fc4e92 Bump deps 2023-10-05 09:44:07 +02:00
Tony Garnock-Jones a0c40eadd0 Update lockfile 2023-10-05 08:01:55 +02:00
Tony Garnock-Jones fc420d1a86 Bump to pick up macro version bump 2023-10-04 23:24:12 +02:00
Tony Garnock-Jones f3e5652eee New release of syndicate-macros to pick up syn feature flag changes 2023-10-04 22:41:17 +02:00
Tony Garnock-Jones 538ad4244c Hmm the perf increase from mold may have been illusory 2023-10-04 22:00:01 +02:00
Tony Garnock-Jones 1cb2eba0e4 Release independent packages
syndicate-server@0.32.0

Generated by cargo-workspaces
2023-10-04 21:48:35 +02:00
Tony Garnock-Jones a9971fc35a Note about `mold` 2023-10-04 21:48:14 +02:00
Tony Garnock-Jones 8dead81cef 50% performance boost from jemalloc! 2023-10-04 21:28:47 +02:00
Tony Garnock-Jones 16681841a7 Bump version 2023-09-29 14:56:55 +02:00
Tony Garnock-Jones 97fdfe6136 noise mode for syndicate-macaroon 2023-09-29 14:56:35 +02:00
Tony Garnock-Jones c26b67f286 docker-compose.yml 2023-09-29 13:56:09 +02:00
Tony Garnock-Jones 65db64fce1 Update quickstart 2023-09-29 13:55:44 +02:00
Tony Garnock-Jones 0432f8a04a Multiarch docker builds 2023-09-29 13:54:05 +02:00
Tony Garnock-Jones dd69d5caaa A different workaround for https://github.com/dtolnay/proc-macro2/issues/402 2023-09-29 09:42:12 +02:00
Tony Garnock-Jones e6bc6d091f Bump dependencies 2023-09-27 23:31:51 +02:00
Tony Garnock-Jones 4c9505d28e Get the project building again 2023-09-27 23:28:06 +02:00
Tony Garnock-Jones a74cd19526 Remove apparently-useless drop() call 2023-05-26 13:52:31 +02:00
Tony Garnock-Jones 5f3558817e Workaround for rust-embedded/cross issue 598 is no longer required 2023-05-12 11:07:10 +02:00
Tony Garnock-Jones b4a3f743b5 Bump deps; enable extra-traits in syn for Debug impl for syn::Expr and syn::Type 2023-05-12 10:33:15 +02:00
Tony Garnock-Jones a340b127d7 Release independent packages
syndicate@0.28.2

Generated by cargo-workspaces
2023-02-11 21:53:28 +01:00
Tony Garnock-Jones 08486b4b1c Merge latest changes from the syndicate-protocols repository 2023-02-11 21:52:34 +01:00
Tony Garnock-Jones d8a139b23a Switch back to transport sequence representation 2023-02-11 21:49:49 +01:00
Tony Garnock-Jones 990f3fe4cb Release independent packages
syndicate@0.28.1

Generated by cargo-workspaces
2023-02-11 17:45:50 +01:00
Tony Garnock-Jones 3a3c3c0ee4 Merge latest changes from the syndicate-protocols repository 2023-02-11 17:44:34 +01:00
Tony Garnock-Jones 46fd2dec3b Set of any for transports in gatekeeper.Route 2023-02-11 17:43:42 +01:00
Tony Garnock-Jones 7d7b3135ba Release independent packages
syndicate@0.28.0
syndicate-macros@0.23.0
syndicate-server@0.31.0
syndicate-tools@0.6.0

Generated by cargo-workspaces
2023-02-10 16:44:38 +01:00
Tony Garnock-Jones 06d52c43da Merge latest changes from the syndicate-protocols repository 2023-02-09 23:07:58 +01:00
Tony Garnock-Jones 1ae2583414 Remove accidental self-qualification 2023-02-09 23:07:43 +01:00
Tony Garnock-Jones 4dca1b1615 More updates to gatekeeper protocol 2023-02-09 00:17:12 +01:00
Tony Garnock-Jones 45406c75ac Merge latest changes from the syndicate-protocols repository 2023-02-08 23:44:22 +01:00
Tony Garnock-Jones f3c9662607 Another small error 2023-02-08 23:43:51 +01:00
Tony Garnock-Jones f134d0227d Merge latest changes from the syndicate-protocols repository 2023-02-08 23:39:53 +01:00
Tony Garnock-Jones 82624d3007 Another small error 2023-02-08 23:39:42 +01:00
Tony Garnock-Jones 8de00045e6 Merge latest changes from the syndicate-protocols repository 2023-02-08 23:36:37 +01:00
Tony Garnock-Jones 8b690b9103 Repair minor error 2023-02-08 23:36:21 +01:00
Tony Garnock-Jones f8d1acfa3e Merge latest changes from the syndicate-protocols repository 2023-02-08 23:11:49 +01:00
Tony Garnock-Jones 5a52f243e5 Adjust steps in noise and sturdy 2023-02-08 23:11:05 +01:00
Tony Garnock-Jones 6224baa2b6 Avoid variable-arity steps 2023-02-08 23:04:42 +01:00
Tony Garnock-Jones 00c99d96df Simplify 2023-02-08 22:35:34 +01:00
Tony Garnock-Jones 6ec6bbaf41 Incorporate Step, Description 2023-02-08 22:27:41 +01:00
Tony Garnock-Jones ddc94bfa60 Merge latest changes from the syndicate-protocols repository 2023-02-08 22:12:01 +01:00
Tony Garnock-Jones 8619342e5e Refinements 2023-02-08 22:11:45 +01:00
Tony Garnock-Jones 5bcb268ff8 Adjust ResolvePath/TransportConnection/PathStep 2023-02-08 20:36:14 +01:00
Tony Garnock-Jones 7e8dcef0e2 Refactor gatekeeper implementation for new protocols. 2023-02-08 18:01:51 +01:00
Tony Garnock-Jones 9a5d452754 Merge latest changes from the syndicate-protocols repository 2023-02-08 17:47:01 +01:00
Tony Garnock-Jones 9cd2e6776c Refactor gatekeeper protocols. 2023-02-08 17:46:47 +01:00
Tony Garnock-Jones c0d4b535a3 Merge latest changes from the syndicate-protocols repository 2023-02-08 14:35:19 +01:00
Tony Garnock-Jones 3c1cb11779 Allow override of PROTOCOLS_BRANCH 2023-02-08 14:35:15 +01:00
Tony Garnock-Jones a086c1d721 Repair typo 2023-02-07 13:18:18 +01:00
Tony Garnock-Jones bc41182533 Another small repair 2023-02-07 13:11:14 +01:00
Tony Garnock-Jones 2ad99b56b8 Be more precise about HMAC-BLAKE2s-256 and the key length 2023-02-07 12:44:47 +01:00
Tony Garnock-Jones a2013287db Release independent packages
syndicate@0.27.0
syndicate-macros@0.22.0
syndicate-server@0.30.0
syndicate-tools@0.5.0

Generated by cargo-workspaces
2023-02-06 18:15:03 +01:00
Tony Garnock-Jones 7de2752068 Switch to HMAC-BLAKE2s 2023-02-06 17:09:17 +01:00
Tony Garnock-Jones d2c783927c Merge latest changes from the syndicate-protocols repository 2023-02-06 16:31:50 +01:00
Tony Garnock-Jones f6b88ee3fb Switch to HMAC-BLAKE2s 2023-02-06 16:19:03 +01:00
Tony Garnock-Jones ee8a23aa2e Switch from milliseconds to seconds. Fixes #1 2023-02-06 15:36:17 +01:00
Tony Garnock-Jones 833be7b293 Update attenuations 2023-02-06 14:48:18 +01:00
Tony Garnock-Jones 12eaeb8f62 Merge latest changes from the syndicate-protocols repository 2023-02-06 13:35:51 +01:00
Tony Garnock-Jones 5cd0335a79 Argh, previous commit won't work 2023-02-06 11:06:02 +01:00
Tony Garnock-Jones b52da09081 More usable (?) rewrite language 2023-02-06 10:58:16 +01:00
Tony Garnock-Jones 9ca618268e Simplify attenuations 2023-02-06 10:45:41 +01:00
Tony Garnock-Jones 1879c52963 Merge latest changes from the syndicate-protocols repository 2023-02-04 17:09:55 +01:00
Tony Garnock-Jones 9f1f76d0ca Remove racketEvent.prs 2023-02-04 16:30:27 +01:00
Tony Garnock-Jones f4078aabaa Update binary bundle 2023-02-04 13:46:49 +01:00
Tony Garnock-Jones 557a36756f First step of cleanup of protocols 2023-02-04 13:46:34 +01:00
Tony Garnock-Jones 9f88765cf7 Release independent packages
syndicate@0.26.2
syndicate-server@0.29.2

Generated by cargo-workspaces
2023-01-31 15:12:12 +01:00
Tony Garnock-Jones 2a11bc6bbb Use bundled bundle, rather than external file, which isn't found in published crate build 2023-01-31 15:11:50 +01:00
Tony Garnock-Jones 1dac3e5a19 Release independent packages
syndicate@0.26.1
syndicate-server@0.29.1

Generated by cargo-workspaces
2023-01-31 14:21:29 +01:00
Tony Garnock-Jones 2382157039 Oops. Wrong dep on preserves-schema 2023-01-31 14:21:18 +01:00
Tony Garnock-Jones 69c526436f Release independent packages
syndicate@0.26.0
syndicate-macros@0.21.0
syndicate-server@0.29.0
syndicate-tools@0.4.0

Generated by cargo-workspaces
2023-01-31 14:13:06 +01:00
Tony Garnock-Jones 9761e68bd0 Bump 2023-01-31 14:10:57 +01:00
Tony Garnock-Jones 4becf23caa Switch from snow to noise-protocol; Noise responder implementation 2023-01-30 17:30:44 +01:00
Tony Garnock-Jones 94040ae566 More ergonomic guard api 2023-01-30 17:29:25 +01:00
Tony Garnock-Jones c3571a2faf Expose a more flexible interface to relays 2023-01-30 17:28:20 +01:00
Tony Garnock-Jones dbbbc8c1c6 Breaking change: much improved error API 2023-01-30 14:25:58 +01:00
Tony Garnock-Jones 3dea29ffe4 Repair macro for syndicate patterns involving dicts and seqs 2023-01-30 09:38:43 +01:00
Tony Garnock-Jones f3424c160d Groundwork for handling noise connects 2023-01-28 22:45:48 +01:00
Tony Garnock-Jones 049ef9aea7 Merge latest changes from the syndicate-protocols repository 2023-01-27 12:52:58 +01:00
Tony Garnock-Jones 07a5f688be Repair binary bundle 2023-01-27 12:52:07 +01:00
Tony Garnock-Jones 48c61098c4 Merge latest changes from the syndicate-protocols repository 2023-01-27 12:49:17 +01:00
Tony Garnock-Jones fff84d4c2a Update noise mapping 2023-01-27 12:45:02 +01:00
Tony Garnock-Jones bc62cab348 Bump deps 2023-01-27 09:42:41 +01:00
Tony Garnock-Jones 5983cd01f1 Another note re noise 2023-01-23 13:08:12 +01:00
Tony Garnock-Jones e8881f5980 Now I have actually implemented Noise, revise the schema 2023-01-19 12:18:58 +01:00
Tony Garnock-Jones 40b4681a6e Ugh, xsalsa20poly1305 as an AEAD isn't a thing 2023-01-16 16:21:12 +01:00
Tony Garnock-Jones 0f5e033174 noise 2023-01-16 15:52:46 +01:00
Tony Garnock-Jones aae53b5525 Update precompiled form 2023-01-16 15:51:57 +01:00
Tony Garnock-Jones fce32a589c Release independent packages
syndicate@0.25.0
syndicate-macros@0.20.0
syndicate-server@0.28.0
syndicate-tools@0.3.0

Generated by cargo-workspaces
2023-01-16 15:05:48 +01:00
Tony Garnock-Jones bae21fb69b Update deps; in particular, get preserves 3.0, which has the fixed numerics/symbols syntax 2023-01-16 15:03:35 +01:00
Tony Garnock-Jones 25ef92f78e Include syndicate package version in syndicate-server version display 2023-01-09 09:30:46 +01:00
Tony Garnock-Jones 2f6f1dde26 Release independent packages
syndicate@0.24.3

Generated by cargo-workspaces
2023-01-09 09:21:13 +01:00
Tony Garnock-Jones b5564979f0 Repair error in sync handling 2023-01-09 09:20:58 +01:00
Tony Garnock-Jones 5ca6bdb3bb Release independent packages
syndicate@0.24.2

Generated by cargo-workspaces
2023-01-08 13:19:21 +01:00
Tony Garnock-Jones 11b5a187b9 Fix tag format template 2023-01-08 13:19:06 +01:00
Tony Garnock-Jones 1cb89f0b6b Pick up preserves bugfix around schematized embedded-ref deserialization 2023-01-08 13:17:46 +01:00
Tony Garnock-Jones 4c03646567 HTTP 2022-12-13 18:08:34 +13:00
Tony Garnock-Jones 90940b3c3d Bump preserves version 2022-10-26 16:03:30 +02:00
Tony Garnock-Jones eb2bd3cf8e Release independent packages
syndicate@0.24.1
syndicate-macros@0.19.1
syndicate-server@0.27.1
syndicate-tools@0.2.1

Generated by cargo-workspaces
2022-10-26 13:46:28 +02:00
Tony Garnock-Jones 451a298f94 Oops, want independent versioning 2022-10-26 13:45:48 +02:00
Tony Garnock-Jones 181523d05c Redo using clap derive instead of builder 2022-10-26 13:44:31 +02:00
Tony Garnock-Jones 4ce2093e52 Bump deps (specifically to get preserves hex bugfix) 2022-10-26 13:42:44 +02:00
Tony Garnock-Jones 2f3b186262 Switch to cargo-workspaces 2022-10-26 13:41:46 +02:00
Tony Garnock-Jones e21485c44d (cargo-release) version 0.2.0 2022-10-24 15:14:07 +02:00
Tony Garnock-Jones 86347412e7 (cargo-release) version 0.19.0 2022-10-24 15:14:07 +02:00
Tony Garnock-Jones 2d46d87f58 (cargo-release) version 0.27.0 2022-10-24 15:14:07 +02:00
Tony Garnock-Jones 54103f87eb (cargo-release) version 0.24.0 2022-10-24 15:14:06 +02:00
Tony Garnock-Jones 4a6bb3e143 Bump preserves-schema 2022-10-24 15:10:37 +02:00
Tony Garnock-Jones cdfe157fd9 Cargo update 2022-10-18 20:54:51 +02:00
Tony Garnock-Jones fbfafc1d1d (cargo-release) version 0.1.0 2022-10-18 14:14:30 +02:00
Tony Garnock-Jones e1eb7ae3dd Prepare for syndicate-tools v0.1.0 release 2022-10-18 14:14:10 +02:00
Tony Garnock-Jones f2be0d5e62 Cosmetic: remove unwanted comment 2022-10-18 14:06:07 +02:00
Tony Garnock-Jones fc930059d3 syndicate-macaroon 2022-10-18 14:05:12 +02:00
Tony Garnock-Jones bcaf08c602 (cargo-release) version 0.26.0 2022-07-22 18:14:08 +02:00
Tony Garnock-Jones 9293bd3904 (cargo-release) version 0.25.0 2022-07-22 18:13:24 +02:00
Tony Garnock-Jones bf1552d9a8 Use busybox as base rather than a completely empty image, for convenience 2022-05-25 11:02:33 +02:00
Tony Garnock-Jones a7ec157437 Update docker scripting 2022-05-24 17:00:02 +02:00
Tony Garnock-Jones ccfcf6ec26 Docker syndicate-server 2022-05-24 16:51:54 +02:00
Tony Garnock-Jones af679531b4 Bump deps for a ~1% speed boost from tracing 0.1.32 2022-03-09 19:20:39 +01:00
Tony Garnock-Jones ec8ba36d6a Add `stringify` quasi-function 2022-03-01 10:02:30 +01:00
Tony Garnock-Jones ec453b7db7 (cargo-release) version 0.24.0 2022-02-06 23:03:51 +01:00
Tony Garnock-Jones efb76bfe91 Add "never" restart policy 2022-02-06 23:03:21 +01:00
Tony Garnock-Jones fb31ea44cf fixtags.sh 2022-02-04 17:06:18 +01:00
Tony Garnock-Jones d75bfe4e35 (cargo-release) version 0.18.0 2022-02-04 17:00:18 +01:00
Tony Garnock-Jones 393514fb3a (cargo-release) version 0.23.0 2022-02-04 17:00:18 +01:00
Tony Garnock-Jones 406f22703b (cargo-release) version 0.23.0 2022-02-04 17:00:18 +01:00
Tony Garnock-Jones 4f0145e161 Sort directory entries in config scan 2022-02-04 16:59:29 +01:00
Tony Garnock-Jones b09fbdceec Remove hardcoded milestones and system-layer notions 2022-02-04 16:00:15 +01:00
Tony Garnock-Jones b556414fec Merge latest changes from the syndicate-protocols repository 2022-02-04 14:27:02 +01:00
Tony Garnock-Jones ca92d99c52 Remove notion of "system-layer-service" from core protocols 2022-02-04 14:26:50 +01:00
Tony Garnock-Jones 98c76df2f7 Repair accidentally-committed reference to local path (!) 2022-02-04 14:15:28 +01:00
Tony Garnock-Jones 0a0d977a48 Bump deps 2022-02-04 14:13:08 +01:00
Tony Garnock-Jones 8a0675d8ee (cargo-release) version 0.22.0 2022-02-04 14:02:10 +01:00
Tony Garnock-Jones af2578f887 (cargo-release) version 0.17.0 2022-02-04 14:02:10 +01:00
Tony Garnock-Jones 84ebf530d3 (cargo-release) version 0.22.0 2022-02-04 14:02:10 +01:00
Tony Garnock-Jones f88592282d MAJOR REFACTORING OF CORE ASSERTION-TRACKING STRUCTURES. Little impact on API. Read on for details.
2022-02-01 15:22:30 Two problems.

 - If a stop action panics (in `_terminate_facet`), the Facet is dropped before its outbound
   handles are removed. With the code as it stands, this leaks assertions (!!).

 - The logic for removing an outbound handle seems to be running in the wrong facet context???
   (See `f.outbound_handles.remove(&handle)` in the cleanup actions
    - I think I need to remove the for_myself mechanism
    - and add some callbacks to run only on successful commit

2022-02-02 12:12:33 This is hard.

Here's the current implementation:

 - assert
    - inserts into outbound_handles of active facet
    - adds cleanup action describing how to do the retraction
    - enqueues the assert action, which
       - calls e.assert()

 - retract
    - looks up & removes the cleanup action, which
       - enqueues the retract action, which
          - removes from outbound_handles of the WRONG facet in the WRONG actor
          - calls e.retract()

 - _terminate_facet
    - uses outbound_handles to retract the facet's assertions
    - doesn't directly touch cleanup actions, relying on retract to do that
    - if one of a facet's stop actions panics, will drop the facet, leaking its assertions
    - actually, even if a stop action yields `Err`, it will drop the facet and leak assertions
    - yikes

 - facet drop
    - panics if outbound_handles is nonempty

 - actor cleanup
    - relies on facet tree to find assertions to retract

Revised plan:

 - ✓ revise Activation/PendingEvents structures
    - rename `cleanup_actions` to `outbound_assertions`
    - remove `for_myself` queues and `final_actions`
    - add `pre_commit_actions`, `rollback_actions` and `commit_actions`

 - ✓ assert
    - as before
    - but on rollback, removes from `outbound_handles` (if the facet still exists) and
      `outbound_assertions` (always)
    - marks the new assertion as "established" on commit

 - ✓ retract
    - lookup in `outbound_assertions` by handle, using presence as indication it hasn't been
      scheduled in this turn
    - on rollback, put it back in `outbound_assertions` ONLY IF IT IS MARKED ESTABLISHED -
      otherwise it is a retraction of an `assert` that has *also* been rolled back in this turn
    - on commit, remove it from `outbound_handles`
    - enqueue the retract action, which just calls e.retract()

 - ✓ _terminate_facet
    - revised quite a bit now we rely on `RunningActor::cleanup` to use `outbound_assertions`
      rather than the facet tree.
    - still drops Facets on panic, but this is now mostly harmless (reorders retractions a bit)
    - handles `Err` from a stop action more gracefully
    - slightly cleverer tracking of what needs doing based on a `TerminationDirection`
    - now ONLY applies to ORDERLY cleanup of the facet tree. Disorderly cleanup ignores the
      facet tree and just retracts the assertions willy-nilly.

 - ✓ facet drop
    - warn if outbound_handles is nonempty, but don't do anything about it

 - ✓ actor cleanup
    - doesn't use the facet tree at all.
    - cleanly shutting down is done elsewhere
    - uses the remaining entries in `outbound_assertions` (previously `cleanup_actions`) to
      deal with retractions for dropped facets as well as any other facets that haven't been
      cleanly shut down

 - ✓ activate
    - now has a panic_guard::PanicGuard RAII for conveying a crash to an actor in case the
      activation is happening from a linked task or another thread (this wasn't the case in the
      examples that provoked this work, though)
    - simplified
    - explicit commit/rollback decision

 - ✓ Actor::run
    - no longer uses the same path for crash-termination and success-termination
    - instead, for success-termination, takes a turn that calls Activation::stop_root
       - this cleans up the facet tree using _terminate_facet
       - when the turn ends, it notices that the root facet is gone and shuts down the actor
       - so in principle there will be nothing for actor cleanup to do
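The assert/retract bookkeeping above can be modelled in miniature. This is a hedged sketch only, not the actual syndicate-rs API: the names `Outbound`, `State`, `Handle` and the two maps are invented for illustration, standing in for `outbound_assertions` and the per-turn staging described in the plan.

```rust
use std::collections::HashMap;

type Handle = u64;

#[derive(Clone, Copy, PartialEq, Debug)]
enum State { Pending, Established }

// Hypothetical model: assertions indexed by handle, marked
// "established" only once the turn that created them commits.
#[derive(Default)]
struct Outbound {
    assertions: HashMap<Handle, State>,          // stands in for `outbound_assertions`
    staged_retractions: HashMap<Handle, State>,  // removed this turn, awaiting commit
}

impl Outbound {
    fn assert(&mut self, h: Handle) {
        self.assertions.insert(h, State::Pending);
    }

    fn retract(&mut self, h: Handle) {
        // Presence in `assertions` indicates the handle hasn't already
        // been scheduled for retraction in this turn.
        if let Some(st) = self.assertions.remove(&h) {
            self.staged_retractions.insert(h, st);
        }
    }

    fn commit(&mut self) {
        // New assertions become established; staged retractions are final.
        for st in self.assertions.values_mut() {
            *st = State::Established;
        }
        self.staged_retractions.clear();
    }

    fn rollback(&mut self) {
        // Assertions from the rolled-back turn vanish. A retraction is
        // undone ONLY if it targeted an established assertion; otherwise
        // both the assert and the retract belong to this turn.
        self.assertions.retain(|_, st| *st == State::Established);
        for (h, st) in self.staged_retractions.drain() {
            if st == State::Established {
                self.assertions.insert(h, State::Established);
            }
        }
    }
}
```

Rolling back a turn that both asserted handle 2 and retracted established handle 1 leaves only handle 1 in place, matching the "put it back ONLY IF IT IS MARKED ESTABLISHED" rule above.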

2022-02-04 13:52:34 This took days. :-(
2022-02-04 13:59:37 +01:00
Tony Garnock-Jones 98731ba968 Merge latest changes from the syndicate-protocols repository 2022-02-03 22:57:58 +01:00
Tony Garnock-Jones d820601eea Better trace messages from dependency tracking 2022-02-03 22:57:21 +01:00
Tony Garnock-Jones 28b0c5b4d5 One-shot daemons shouldn't be considered ready at all, just complete 2022-02-03 22:56:20 +01:00
Tony Garnock-Jones 19c96bdef2 Allow userDefined states 2022-02-03 22:55:06 +01:00
Tony Garnock-Jones 99a027dc26 Remove unwanted commented-out code 2022-02-03 15:59:19 +01:00
Tony Garnock-Jones 9add501124 Remove the (no-op) rollback entirely 2022-02-02 12:21:43 +01:00
Tony Garnock-Jones 38a5279827 Include facet ID in panic message when nonempty outbound_handles at drop time 2022-02-02 12:10:33 +01:00
Tony Garnock-Jones 1244e416d0 clear/deliver -> rollback/commit, and don't commit on drop 2022-02-02 12:10:13 +01:00
Tony Garnock-Jones d7a847de37 Refactor with_facet 2022-02-02 11:52:13 +01:00
Tony Garnock-Jones 4ea07cdd6b Further simplify supervision protocols 2022-01-26 23:37:43 +01:00
Tony Garnock-Jones 70c442ad47 Use a named unit struct instead of () 2022-01-26 23:37:21 +01:00
Tony Garnock-Jones 7e4654c8f7 Simplify and repair stdout/stderr logging in daemons 2022-01-26 23:37:04 +01:00
Tony Garnock-Jones 1111776754 Eliminate need for awkward boot_fn transmission subprotocol 2022-01-26 22:30:47 +01:00
Tony Garnock-Jones cc11120f23 Avoid erasing information immediately prior to it being needed (!) (when we can) 2022-01-26 22:12:45 +01:00
Tony Garnock-Jones e600d59f6e Conditional match expressions. I can't help but feel I'm committing some kind of crime against programming language design here. 2022-01-20 10:17:15 +01:00
Tony Garnock-Jones 9080dc6f1e Fill in the rest of the jolly owl 2022-01-20 10:12:04 +01:00
Tony Garnock-Jones a9f83e0a9d Merge latest changes from the syndicate-protocols repository 2022-01-20 10:12:00 +01:00
Tony Garnock-Jones ab34b62cf1 Refine the trace protocol a bit 2022-01-20 09:40:53 +01:00
Tony Garnock-Jones 4dc613a091 Foundations for causal tracing 2022-01-19 14:40:50 +01:00
Tony Garnock-Jones f7a5edff39 Merge latest changes from the syndicate-protocols repository 2022-01-19 14:36:09 +01:00
Tony Garnock-Jones 5a65256cf3 Syndicate traces 2022-01-19 14:24:21 +01:00
Tony Garnock-Jones 650463ff20 Accommodate extension point 2022-01-17 00:32:16 +01:00
Tony Garnock-Jones c951cea508 Merge latest changes from the syndicate-protocols repository 2022-01-17 00:26:10 +01:00
Tony Garnock-Jones 257c604e2b Repair bad record pattern 2022-01-17 00:22:10 +01:00
Tony Garnock-Jones a06d532006 Extension point. Closes #2 2022-01-16 21:17:36 +01:00
Tony Garnock-Jones 45f9abfd97 (cargo-release) version 0.21.0 2022-01-16 15:15:51 +01:00
Tony Garnock-Jones 894f0a648a (cargo-release) version 0.16.0 2022-01-16 15:15:51 +01:00
Tony Garnock-Jones e6a2a25f62 (cargo-release) version 0.21.0 2022-01-16 15:15:51 +01:00
Tony Garnock-Jones 3d3c1ebf70 Better handling of activation after termination, which repairs a scary-looking-but-harmless panic in config_watcher's private thread 2022-01-16 00:02:33 +01:00
Tony Garnock-Jones a37a2739a0 Log compiled instructions in config_watcher 2022-01-15 23:23:48 +01:00
Tony Garnock-Jones 11894ecb70 Better tracing of supervisor activity 2022-01-15 23:23:18 +01:00
Tony Garnock-Jones b810784750 Script `+=` operator; sketch of `=~` operator 2022-01-15 23:22:51 +01:00
Tony Garnock-Jones 9453408e42 Propagate script compilation errors properly. 2022-01-15 23:22:13 +01:00
Tony Garnock-Jones 2b296d79c7 Repair error in dataspace assertion idempotency.
If a facet, during X, asserts X, for all X, then X includes all
`Observe` assertions. Since duplicates are ignored, re-assertion of X
should be a no-op (though a subsequent retraction of X will then have
no effect!). However, the implementation had been ignoring whether it
had seen `Observe` assertions before, and was *always* (re)placing
them into the index, leading to runaway growth.

The repair is to only process `Observe` records on first assertion and
last retraction.

As part of this change, Dataspaces have been given names, and some
cruft from the previous implementation has been removed.
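The "first assertion and last retraction" rule amounts to reference-counting duplicate assertions. A hedged sketch, not the dataspace's real data structure: the `Bag` type and its method names here are invented for illustration, returning `true` exactly on the 0→1 and 1→0 transitions where observer-index maintenance should happen.

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Counted multiset: duplicate assertions bump a count instead of
// re-triggering index maintenance.
struct Bag<T: Hash + Eq> {
    counts: HashMap<T, usize>,
}

impl<T: Hash + Eq> Bag<T> {
    fn new() -> Self {
        Bag { counts: HashMap::new() }
    }

    /// True iff this is the FIRST copy of `v` — only then should an
    /// `Observe` record be placed into the index.
    fn insert(&mut self, v: T) -> bool {
        let n = self.counts.entry(v).or_insert(0);
        *n += 1;
        *n == 1
    }

    /// True iff this was the LAST copy of `v` — only then should the
    /// corresponding index entry be removed.
    fn remove(&mut self, v: &T) -> bool {
        match self.counts.get_mut(v) {
            Some(n) if *n > 1 => { *n -= 1; false }
            Some(_) => { self.counts.remove(v); true }
            None => false,
        }
    }
}
```

Re-asserting the same `Observe` then returns `false` from `insert`, so the index is left untouched and cannot grow without bound.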
2022-01-15 23:18:29 +01:00
Tony Garnock-Jones af4af8b048 Bump deps 2022-01-14 15:55:30 +01:00
Tony Garnock-Jones 78ef7c07db documentation.prs 2022-01-14 15:36:41 +01:00
Tony Garnock-Jones 6325538ea6 (cargo-release) version 0.20.1 2022-01-12 12:28:38 +01:00
Tony Garnock-Jones 7fbe6360e7 Support patterns like <?r <Something _ _ _>> 2022-01-12 12:28:03 +01:00
Tony Garnock-Jones d007da2e94 (cargo-release) version 0.20.0 2022-01-10 13:39:48 +01:00
Tony Garnock-Jones 08c7bd3808 (cargo-release) version 0.15.0 2022-01-10 13:39:48 +01:00
Tony Garnock-Jones 96cfb1d4e7 (cargo-release) version 0.20.0 2022-01-10 13:39:48 +01:00
Tony Garnock-Jones 2d179d1e46 Avoid racy approaches to actor-termination.
They're still there: you can use turn.state.shutdown(), which enqueues
a message for eventual actor shutdown. But it's better to use
turn.stop_root(), which terminates the actor's root facet within the
current turn, ensuring that the actor's exit_status is definitely set
by the time the turn has committed.

This is necessary to avoid a racy panic in supervision: before this
change, an asynchronous SystemMessage::Release was sent when the last
facet of an actor was stopped. Depending on load (!), any retractions
resulting from the shutdown would be delivered before the Release
arrived at the stopping actor. The supervision logic expected
exit_status to be definitely set by the time release() fired, which
wasn't always true. Now that in-turn shutdown has been implemented,
this is a reliable invariant.

A knock-on change is the need to remove
enqueue_for_myself_at_commit(), replacing it with a use of
pending.for_myself.push(). The old enqueue_for_myself_at_commit
approach could lead to lost actions as follows:

    A: start linked task T, which spawns a new tokio coroutine
            T: activate some facet in A and terminate A's root facet
            T: at this point, A transitions to "not running"
    A: spawn B, enqueuing a call to B's boot()
    A: commit turn. Deliveries for others go out as usual,
       but those for A will be discarded since A is "not running".
       This means that the call to B's boot() goes missing.

Using pending.for_myself.push() instead assures that B's boot will
always run at the end of A's turn, without regard for whether A is in
some terminated state.

I think that this kind of race could have happened before, but
something about switching away from shutdown() seems to trigger it
somewhat reliably.
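The lost-action scenario above hinges on when queued actions are checked against the actor's running state. A minimal sketch under invented names (`Turn`, `ActorState`, `finish` are illustrative, not the crate's API): commit-time deliveries are discarded once the actor is no longer running, while the turn's own `for_myself`-style queue still drains.

```rust
#[derive(PartialEq)]
enum ActorState { Running, Terminated }

struct Turn {
    state: ActorState,
    at_commit: Vec<&'static str>,   // old enqueue_for_myself_at_commit style
    for_myself: Vec<&'static str>,  // pending.for_myself style
}

impl Turn {
    fn finish(mut self) -> Vec<&'static str> {
        let mut ran = Vec::new();
        // for_myself actions run at end of turn regardless of the
        // actor's state...
        ran.append(&mut self.for_myself);
        // ...but commit-time deliveries are dropped for a "not running"
        // actor, which is how B's boot() went missing.
        if self.state == ActorState::Running {
            ran.append(&mut self.at_commit);
        }
        ran
    }
}
```

With the actor already transitioned to `Terminated` by a linked task, only the `for_myself` queue survives, which is why switching to `pending.for_myself.push()` makes B's boot reliable.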
2022-01-10 12:52:29 +01:00
Tony Garnock-Jones e06e5fef10 Put thread IDs in logging output 2022-01-10 12:52:12 +01:00
Tony Garnock-Jones c3a9525ef1 Track enough information to allow piecing-together of parent/child relationships among actors 2022-01-10 12:52:12 +01:00
Tony Garnock-Jones 58bde1e29d Add Activation::stop_root 2022-01-10 11:23:02 +01:00
Tony Garnock-Jones a6ea858f1c Belt and suspenders 2022-01-09 21:01:55 +01:00
Tony Garnock-Jones 55c3636ef2 Add x86_64-binary-debug target 2022-01-09 21:00:20 +01:00
Tony Garnock-Jones 76d4ffd8a2 (cargo-release) version 0.14.0 2022-01-08 16:05:47 +01:00
Tony Garnock-Jones 9f560b4dd0 (cargo-release) version 0.19.0 2022-01-08 16:05:47 +01:00
Tony Garnock-Jones fcb345dbaf (cargo-release) version 0.19.0 2022-01-08 16:05:47 +01:00
Tony Garnock-Jones 82ccbdb282 Simplify and correct facet stop logic; always run stop actions in parent facet context 2022-01-08 15:27:44 +01:00
Tony Garnock-Jones 0d25d76bec Split out (internal) on_facet_stop from on_stop 2022-01-08 15:26:34 +01:00
Tony Garnock-Jones 19b04b82a2 Improve documentation regarding stop/exit actions 2022-01-08 15:25:41 +01:00
Tony Garnock-Jones be27348d29 Activation::facet_ids 2022-01-08 15:24:10 +01:00
Tony Garnock-Jones 7524b634d3 Repair daemon service restarts 2022-01-08 13:54:25 +01:00
Tony Garnock-Jones 4eddcf7518 (cargo-release) version 0.13.0 2022-01-07 22:06:08 +01:00
Tony Garnock-Jones c29f46c117 (cargo-release) version 0.18.0 2022-01-07 22:06:08 +01:00
Tony Garnock-Jones ff827f9c38 (cargo-release) version 0.18.0 2022-01-07 22:06:08 +01:00
Tony Garnock-Jones 6f8fb014f2 Update daemon restart policy defaults to line up better with the new supervisor defaults 2022-01-07 22:05:12 +01:00
Tony Garnock-Jones 25e75324cf (cargo-release) version 0.17.0 2022-01-07 17:19:15 +01:00
Tony Garnock-Jones 02d832500f (cargo-release) version 0.12.0 2022-01-07 17:19:15 +01:00
Tony Garnock-Jones 5281da096c (cargo-release) version 0.17.0 2022-01-07 17:19:14 +01:00
Tony Garnock-Jones 41b1708cea Append a [] to config .pr files, for ergonomics of commenting (!) 2022-01-07 17:18:16 +01:00
Tony Garnock-Jones 895a2f676c lifecycle::terminate_on_service_restart; make debt reporter accept a parameter 2022-01-07 17:18:00 +01:00
Tony Garnock-Jones fce928b5b0 Warn on restart intensity excess 2022-01-07 17:16:20 +01:00
Tony Garnock-Jones 33a0a52d6b Change SupervisorConfiguration default to RestartPolicy::Always 2022-01-07 17:16:05 +01:00
Tony Garnock-Jones f956f3d994 Activation::every 2022-01-07 17:15:51 +01:00
Tony Garnock-Jones 1744a0a99a Update Makefile for latest preserves-schemac command line interface changes 2022-01-07 17:15:30 +01:00
Tony Garnock-Jones e92c2e6a7b `on_message!` macro, like `during!` 2022-01-07 17:15:03 +01:00
Tony Garnock-Jones ffcd851768 Merge latest changes from the syndicate-protocols repository 2022-01-07 15:29:32 +01:00
Tony Garnock-Jones e04b898c7f Adjustments to service.prs 2022-01-07 15:29:20 +01:00
Tony Garnock-Jones b465036773 (cargo-release) version 0.16.0 2021-12-13 20:35:43 +01:00
Tony Garnock-Jones 458c2795f9 (cargo-release) version 0.11.0 2021-12-13 20:35:43 +01:00
Tony Garnock-Jones 760314ee5e (cargo-release) version 0.16.0 2021-12-13 20:35:43 +01:00
Tony Garnock-Jones bbcc15c74d Fix length checks 2021-12-13 16:05:43 +01:00
Tony Garnock-Jones f5b1fec90f Follow simplifications to sturdy caveats 2021-12-13 16:00:25 +01:00
Tony Garnock-Jones 091ca088e0 Merge latest changes from the syndicate-protocols repository 2021-12-13 15:43:28 +01:00
Tony Garnock-Jones a831b02ca5 Accommodate changes to dataspacePatterns 2021-12-13 15:43:24 +01:00
Tony Garnock-Jones 5f60c22e49 More simplifications, to sturdy this time 2021-12-13 15:43:01 +01:00
Tony Garnock-Jones ea9e48cf31 Merge latest changes from the syndicate-protocols repository 2021-12-13 14:22:58 +01:00
Tony Garnock-Jones 49075e7e84 Embedded values count as atoms here 2021-12-13 14:22:32 +01:00
Tony Garnock-Jones aff9f46804 Merge latest changes from the syndicate-protocols repository 2021-12-13 13:50:23 +01:00
Tony Garnock-Jones b3e24d819c Experiment: stricter, simpler dataspacePatterns 2021-12-13 13:49:58 +01:00
Tony Garnock-Jones b2df99cbc0 New preserves-schemac invocation style 2021-12-13 13:44:02 +01:00
Tony Garnock-Jones 5f7d323af6 (cargo-release) version 0.15.1 2021-12-01 11:14:48 +01:00
Tony Garnock-Jones 07dacdc3be (cargo-release) version 0.10.1 2021-12-01 11:14:48 +01:00
Tony Garnock-Jones c7507e8730 (cargo-release) version 0.15.1 2021-12-01 11:14:48 +01:00
Tony Garnock-Jones 730fa2098b It is OK for an assertion to be placed at an unregistered remote_oid, it turns out 2021-12-01 11:14:02 +01:00
Tony Garnock-Jones 34c336e457 More tracing 2021-12-01 11:06:39 +01:00
Tony Garnock-Jones 11363c5776 If an actor panics, make sure to clean up in drop if we can 2021-12-01 11:06:29 +01:00
Tony Garnock-Jones 77a3ee4a31 Release 2021-11-17 08:49:29 +01:00
Tony Garnock-Jones f8ca9b9c89 Current-facet-handle expression 2021-11-17 08:45:56 +01:00
Tony Garnock-Jones 767c4bbe71 Bump preserves-schema dep 2021-11-17 08:45:56 +01:00
Tony Garnock-Jones ccb38c5641 Fix targets for release building 2021-11-14 15:57:12 +01:00
Tony Garnock-Jones 98a09f53e8 Release 0.14.1
syndicate-server@0.14.1

Generated by cargo-workspaces
2021-11-14 15:48:56 +01:00
Tony Garnock-Jones ce743fa934 Repair bug: environments should have symbol keys, not string keys 2021-11-14 15:47:12 +01:00
Tony Garnock-Jones 4deb9cbfcc Update deps 2021-11-13 13:39:10 +01:00
Tony Garnock-Jones a0e6ce0f4d x86_64-binary-release target 2021-11-13 13:37:53 +01:00
Tony Garnock-Jones 63e86efc38 (cargo-release) version 0.9.0 2021-11-12 12:34:21 +01:00
Tony Garnock-Jones 64ccf5c661 (cargo-release) version 0.14.0 2021-11-12 12:34:21 +01:00
Tony Garnock-Jones 212a5a11a3 (cargo-release) version 0.14.0 2021-11-12 12:34:21 +01:00
Tony Garnock-Jones 2ec35ad868 Process the rest of the turn even when an unknown oid is seen 2021-10-18 17:21:09 +02:00
Tony Garnock-Jones 13a0100ad8 Add OnStop (though I'm not sure about it as a permanent feature! The syntax is gross) 2021-10-13 12:13:19 +02:00
Tony Garnock-Jones d5f14ab761 Makefile & Cross.toml hack to work around an aarch64 cross-compilation issue (https://github.com/rust-embedded/cross/issues/598) 2021-10-13 12:12:02 +02:00
Tony Garnock-Jones 50e55e3fca Use the localdev pattern 2021-10-08 18:14:56 +02:00
Tony Garnock-Jones 1c80b183f1 (cargo-release) version 0.13.0 2021-10-08 16:40:11 +02:00
Tony Garnock-Jones 49eeb2452d (cargo-release) version 0.8.0 2021-10-08 16:40:11 +02:00
Tony Garnock-Jones 6f18f728d6 (cargo-release) version 0.13.0 2021-10-08 16:40:11 +02:00
Tony Garnock-Jones 4713005997 wait_for_all_actors_to_stop 2021-10-08 16:37:26 +02:00
Tony Garnock-Jones baf98d6c54 Better span naming and logging tweaks 2021-10-08 16:37:17 +02:00
Tony Garnock-Jones 3c42b5eaeb Tweak logging 2021-10-07 22:21:38 +02:00
Tony Garnock-Jones e101258473 Message handling 2021-10-07 22:03:29 +02:00
Tony Garnock-Jones fb744082b9 Only include config files with names ending in .pr 2021-10-07 21:37:24 +02:00
Tony Garnock-Jones c51f6b2a4e Repair off-by-one in error message 2021-10-07 21:29:13 +02:00
Tony Garnock-Jones 733037f41b "timestamp" expression 2021-10-07 21:29:01 +02:00
Tony Garnock-Jones 0837606ca7 Message sending 2021-10-07 21:28:47 +02:00
Tony Garnock-Jones 3c106dcb86 Refine logging 2021-10-07 21:28:20 +02:00
Tony Garnock-Jones 2d31e86b05 Update configuration in run-server 2021-10-07 20:54:14 +02:00
Tony Garnock-Jones ac6f37cf0c Clean up error reporting 2021-10-07 18:10:59 +02:00
Tony Garnock-Jones 40025b90a6 More capability-oriented scripting language 2021-10-07 17:00:04 +02:00
Tony Garnock-Jones 0d7ac7441f stop() and stop_facet(facet_id) now return unit 2021-10-07 16:59:34 +02:00
Tony Garnock-Jones 7b6a2dab76 More interesting config interpreter 2021-10-06 22:03:12 +02:00
Tony Garnock-Jones f640111f20 Huh, I seem to have left this unfinished 2021-10-06 22:02:27 +02:00
Tony Garnock-Jones 97af85a024 Merge latest changes from the syndicate-protocols repository 2021-10-06 21:52:23 +02:00
Tony Garnock-Jones b42230b96a ServiceObject 2021-10-06 21:51:08 +02:00
Tony Garnock-Jones 7117215963 Binary and text support 2021-10-05 21:11:16 +02:00
Tony Garnock-Jones f74bc2e069 Remove unnecessary `use` clauses 2021-10-05 21:10:53 +02:00
Tony Garnock-Jones d87ff4f62f Step toward inferior syndicate processes 2021-10-05 19:10:46 +02:00
Tony Garnock-Jones 9af31cfaad More debug output 2021-10-05 19:10:30 +02:00
Tony Garnock-Jones 280d938cc0 Wait 0.1s instead of 1.0s on config file change 2021-10-05 19:09:32 +02:00
Tony Garnock-Jones 81dfae92d8 dirty-consumer, dirty-producer 2021-10-05 14:10:57 +02:00
Tony Garnock-Jones e214d9dce3 Tweak banner 2021-10-05 12:41:26 +02:00
Tony Garnock-Jones 2a7606d626 Track actors globally (eventually for reflection/introspection) 2021-10-05 12:39:28 +02:00
Tony Garnock-Jones 6fb1db4f6b Improve logging 2021-10-04 14:40:39 +02:00
Tony Garnock-Jones 5e3a497c32 First stab at service logging 2021-10-01 22:07:28 +02:00
Tony Garnock-Jones ea7e13b0c0 Begin teasing out general process specification schema 2021-09-30 16:02:39 +02:00
Tony Garnock-Jones b373d3440a Improve names used for definitions in externalServices.prs 2021-09-30 15:38:40 +02:00
Tony Garnock-Jones ed12c0883e Switch to parking_lot for another performance boost 2021-09-30 13:32:41 +02:00
Tony Garnock-Jones c252975a16 Bump again for a performance boost 2021-09-30 13:16:56 +02:00
Tony Garnock-Jones bb01227b08 Bump preserves versions 2021-09-30 13:10:01 +02:00
Tony Garnock-Jones de795219af Fix up daemon retry logic. Also: named fields; better stop logic.
In particular:

1. The root facet is considered inert even if it has outbound
assertions. This is because the only outbound assertion it can have is
a half-link to a peer actor, which shouldn't prevent the actor from
terminating normally if the user-level "root" facet stops.

2. On stop_facet_and_continue, parent-facet continuations execute
inline rather than at commit time. This is so that a user-level "root"
facet can *replace* itself. Remains to be properly exercised/tested.
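Point 1 can be captured as a one-line predicate. A hedged sketch with invented field names (`Facet`, `children`, `outbound_assertions` here are illustrative counters, not the crate's real representation): the root facet's outbound assertions — at most a half-link to a peer actor — are excluded from the inertness check.

```rust
struct Facet {
    is_root: bool,
    children: usize,
    outbound_assertions: usize,
}

impl Facet {
    // A facet is inert when it has no children and no outbound
    // assertions; the root facet ignores its outbound assertions,
    // since the only one it can hold is the half-link to a peer.
    fn is_inert(&self) -> bool {
        self.children == 0 && (self.is_root || self.outbound_assertions == 0)
    }
}
```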
2021-09-28 17:10:36 +02:00
Tony Garnock-Jones fe7086b84b More debug in counter.rs 2021-09-28 15:18:33 +02:00
Tony Garnock-Jones e8b7fbad0e Repair missing sync_and_adjust call 2021-09-28 15:17:43 +02:00
Tony Garnock-Jones 23fa6629df Cosmetic 2021-09-28 15:17:05 +02:00
Tony Garnock-Jones 982a258a8c Simplify examples 2021-09-28 13:00:48 +02:00
Tony Garnock-Jones 013e99af70 Greatly improve service lifecycle handling 2021-09-28 12:53:18 +02:00
Tony Garnock-Jones d02945c835 Merge latest changes from the syndicate-protocols repository 2021-09-27 13:57:32 +02:00
Tony Garnock-Jones 239b1b15cc Repair incorrect definition name 2021-09-27 13:57:12 +02:00
Tony Garnock-Jones 9078267e76 Fix typo 2021-09-27 13:56:12 +02:00
Tony Garnock-Jones 955177b7db Clarify action of `core-service` 2021-09-27 13:53:54 +02:00
Tony Garnock-Jones 9f3d3dbbc9 Merge latest changes from the syndicate-protocols repository 2021-09-27 13:50:41 +02:00
Tony Garnock-Jones b4b4995d84 Oops - wanted literals, but had refs instead 2021-09-27 13:50:29 +02:00
Tony Garnock-Jones 422904010b Refine approach to services 2021-09-27 13:48:26 +02:00
Tony Garnock-Jones a263a7091d Tweak debug outputs 2021-09-26 11:02:55 +02:00
Tony Garnock-Jones da3fa84fc0 Update preserves dep to 2.0.0 2021-09-25 11:20:30 +02:00
Tony Garnock-Jones d3d088418f Dependency tracking, milestones 2021-09-24 16:15:26 +02:00
Tony Garnock-Jones 5a8a508fdc More general on_stop; the old behaviour is now at on_stop_notify 2021-09-24 16:14:55 +02:00
Tony Garnock-Jones 5cfe2fd2e0 Use `enclose!` in box-and-client example 2021-09-24 16:14:24 +02:00
Tony Garnock-Jones ffae9be241 No more distinction between internal/external protocol variants 2021-09-24 13:04:15 +02:00
Tony Garnock-Jones 9adabddf54 Merge latest changes from the syndicate-protocols repository 2021-09-24 13:03:35 +02:00
Tony Garnock-Jones 6cfd97c91a Remove protocol variant complication (experimental) 2021-09-24 12:57:05 +02:00
Tony Garnock-Jones 770fb79882 Develop service model 2021-09-24 12:56:30 +02:00
Tony Garnock-Jones cc689686ae Armstrong Ring benchmark 2021-09-24 10:57:32 +02:00
Tony Garnock-Jones 2322ad6163 Remove unused ServiceDependency schema definition 2021-09-23 21:46:54 +02:00
Tony Garnock-Jones b81e936caf Use `enclose!` macro 2021-09-23 21:46:10 +02:00
Tony Garnock-Jones d8fa812bb1 Box-and-client dataflow example 2021-09-23 21:44:19 +02:00
Tony Garnock-Jones 531d66205b Intra-actor dataflow and fields; `enclose!` macro 2021-09-23 21:43:32 +02:00
Tony Garnock-Jones a92647b740 Signal running only once spawn has started 2021-09-20 23:32:53 +02:00
Tony Garnock-Jones 9f316ac659 Implement daemon service 2021-09-20 16:42:35 +02:00
Tony Garnock-Jones c87bfd8a2d More flexible env schema 2021-09-20 15:43:13 +02:00
Tony Garnock-Jones 988a22afde Retrieve daemon config 2021-09-20 15:43:00 +02:00
Tony Garnock-Jones 9a09cac5f7 Use `during!` macro in services 2021-09-20 15:10:31 +02:00
Tony Garnock-Jones d5b28097ef Wildcard pattern generation; reactivate daemon stub 2021-09-20 14:35:29 +02:00
Tony Garnock-Jones 01a47b2c76 Fix up during! macro 2021-09-19 20:36:44 +02:00
Tony Garnock-Jones ccd54be3b2 Adapt to new Preserves major version; stub daemon basis 2021-09-19 16:53:37 +02:00
Tony Garnock-Jones 3763b9ac86 (cargo-release) version 0.11.0 2021-09-10 12:43:25 +02:00
Tony Garnock-Jones 4bb01045d2 (cargo-release) version 0.6.0 2021-09-10 12:43:25 +02:00
Tony Garnock-Jones 6c72ed918a (cargo-release) version 0.11.0 2021-09-10 12:43:25 +02:00
Tony Garnock-Jones b5b1a6883c Repair reference-counting across membranes. 2021-09-08 13:11:54 +02:00
Tony Garnock-Jones 7aa67adfbf Use deserialize to avoid a bunch of useless work and code 2021-09-07 23:07:03 +02:00
Tony Garnock-Jones a7cb035b45 Make it possible to retract a handle from a non-current facet in the current actor 2021-09-07 19:12:32 +02:00
Tony Garnock-Jones 2cb72cd020 TODO 2021-09-07 17:28:53 +02:00
Tony Garnock-Jones 9f3b9cfd59 More docs 2021-09-04 17:48:22 +02:00
Tony Garnock-Jones 4af561537b Flow control documentation 2021-09-04 17:38:34 +02:00
Tony Garnock-Jones e3d1a0a43c Bump deps 2021-09-04 16:42:13 +02:00
Tony Garnock-Jones b7b225c9c8 Update README 2021-09-02 15:30:16 +02:00
Tony Garnock-Jones facef964c4 preserves 1.0.0 2021-09-02 11:17:07 +02:00
Tony Garnock-Jones 5f4f7d3a94 Bump deps 2021-09-02 11:10:32 +02:00
Tony Garnock-Jones 622115e13c Comment out my personal path overrides (!) 2021-09-01 19:38:56 +02:00
Tony Garnock-Jones e90fe2c41e Supervisor RestartPolicy 2021-09-01 17:31:01 +02:00
Tony Garnock-Jones 74ca267cef Move prevent_inert_check to During facet, where it is more generally useful 2021-08-31 17:01:43 +02:00
Tony Garnock-Jones fb6070d1cd Avoid spurious long-lived Account 2021-08-31 16:21:00 +02:00
Tony Garnock-Jones 2e232ca5b2 Structured pattern syntax (!) 2021-08-31 16:19:29 +02:00
Tony Garnock-Jones c6e9b613e1 Don't print errors on failed send_actions in EventBuffer::deliver. 2021-08-30 23:49:08 +02:00
Tony Garnock-Jones d8c3e37d17 Supervision; delayed actions; better tracing (incl `M: Debug`); linked task release 2021-08-30 23:41:51 +02:00
Tony Garnock-Jones 5861f91971 Entity::stop, Activation::on_stop 2021-08-30 14:17:40 +02:00
Tony Garnock-Jones 6757d0d4b5 (cargo-release) version 0.10.0 2021-08-30 13:24:56 +02:00
Tony Garnock-Jones dd0f7462b6 (cargo-release) version 0.5.0 2021-08-30 13:24:55 +02:00
Tony Garnock-Jones 7d70d98fe5 (cargo-release) version 0.10.0 2021-08-30 13:24:55 +02:00
Tony Garnock-Jones ea66959cf4 Only insert/replace content for a file if it was able to be read successfully 2021-08-30 13:24:00 +02:00
Tony Garnock-Jones 9b7febb8d7 ConfigWatcher 2021-08-30 12:08:58 +02:00
Tony Garnock-Jones 18e77a87a5 Remove unneeded SERVICE_NAME constant in debt_reporter 2021-08-30 12:08:58 +02:00
Tony Garnock-Jones 29967d76a4 Use tracing's macros for debug/display 2021-08-30 12:08:58 +02:00
Tony Garnock-Jones 1266a80696 Improve core actor tracing/logging 2021-08-30 12:08:58 +02:00
Tony Garnock-Jones 633b83412e Use tracing's macros for debug/display; enable dataspace debug 2021-08-30 12:08:58 +02:00
Tony Garnock-Jones c0b73d3efa Remove unneeded (?) tokio features 2021-08-30 12:08:58 +02:00
Tony Garnock-Jones 989cc65d1c Fix doc links 2021-08-30 11:56:34 +02:00
Tony Garnock-Jones 8d2b5502be syndicate::convert::any_value 2021-08-30 11:56:26 +02:00
Tony Garnock-Jones f0e3e64ffb More logging 2021-08-28 18:55:08 +02:00
Tony Garnock-Jones 4292b06a93 No more default port 2021-08-28 18:55:02 +02:00
Tony Garnock-Jones 0f1432d414 Dynamic service instantiation 2021-08-28 18:50:55 +02:00
Tony Garnock-Jones c0b5623310 Merge latest changes from the syndicate-protocols repository 2021-08-28 15:35:58 +02:00
Tony Garnock-Jones 3200eb1f9a Move pull-protocols target to repo root 2021-08-28 15:35:54 +02:00
Tony Garnock-Jones 738ac3163a spawn_link; reactive debt_reporter service startup 2021-08-28 14:39:00 +02:00
Tony Garnock-Jones a252cfdfdf Introduce a facet immediately under the root facet for user code to run in, to allow something akin to replacement of the root facet 2021-08-27 23:38:51 +02:00
Tony Garnock-Jones cd951e18a0 Factor out gatekeeper::bind 2021-08-27 16:35:45 +02:00
Tony Garnock-Jones 0eff672c30 Split out initial services in syndicate-server 2021-08-27 16:19:14 +02:00
Tony Garnock-Jones f56c0df10f Facets! 2021-08-27 15:31:18 +02:00
Tony Garnock-Jones ae46e42539 Move unused ascii art to a separate file 2021-08-27 13:41:13 +02:00
Tony Garnock-Jones ce6c46f1ae Remove actor next_task_id field 2021-08-26 12:39:08 +02:00
Tony Garnock-Jones 87338ce47a Move debt reporter into syndicate-server 2021-08-26 10:16:09 +02:00
Tony Garnock-Jones 1e12d73c50 Logging tweaks 2021-08-26 10:06:05 +02:00
Tony Garnock-Jones 50116462d2 "cross" build for x86_64-unknown-linux-musl 2021-08-25 22:17:53 +02:00
Tony Garnock-Jones 2cedd740a6 (cargo-release) version 0.9.2 2021-08-25 17:38:59 +02:00
Tony Garnock-Jones 2fbde4a7f2 Treat parent-link, if present, as non-daemon too 2021-08-25 17:38:41 +02:00
Tony Garnock-Jones bc5e4fa736 (cargo-release) version 0.9.1 2021-08-25 17:32:23 +02:00
Tony Garnock-Jones 03677d54d8 (cargo-release) version 0.4.1 2021-08-25 17:32:23 +02:00
Tony Garnock-Jones 716df86a98 (cargo-release) version 0.9.1 2021-08-25 17:32:23 +02:00
Tony Garnock-Jones 2658cedc4f Repair mistake: send logs to stderr instead of stdout 2021-08-25 17:31:48 +02:00
Tony Garnock-Jones 86e140ef2f (cargo-release) version 0.9.0 2021-08-25 16:30:58 +02:00
Tony Garnock-Jones 3f5a14470e (cargo-release) version 0.4.0 2021-08-25 16:30:58 +02:00
Tony Garnock-Jones 9941258b6a (cargo-release) version 0.9.0 2021-08-25 16:30:58 +02:00
Tony Garnock-Jones 1b9d5ef426 Fix up dev release version mismatches 2021-08-25 16:29:01 +02:00
Tony Garnock-Jones 0ad4f7fe56 "Inferior" mode 2021-08-25 16:27:31 +02:00
Tony Garnock-Jones 051843b832 Configurable debt-reporter 2021-08-25 16:27:31 +02:00
Tony Garnock-Jones d4f7988539 Strip release binaries 2021-08-25 16:27:31 +02:00
Tony Garnock-Jones ab77736573 Move cross build stuff to root Makefile/root package 2021-08-25 16:27:31 +02:00
Tony Garnock-Jones 8822fe7886 Latest preserves patch 2021-08-25 16:27:31 +02:00
Tony Garnock-Jones 31242f14ca Repair websocket end-of-stream 2021-08-25 16:27:31 +02:00
Tony Garnock-Jones 3cd6bd5e53 Republish preserves too 2021-08-25 16:27:31 +02:00
Tony Garnock-Jones 0ff8c2c872 Stdio transport 2021-08-19 18:17:51 -04:00
Tony Garnock-Jones c2de82a2b7 schemas/transportAddress.prs 2021-08-18 22:59:59 -04:00
Tony Garnock-Jones 37baf864a4 (cargo-release) version 0.8.0 2021-08-13 21:39:43 -04:00
Tony Garnock-Jones f580ac5a2b (cargo-release) version 0.3.0 2021-08-13 21:39:43 -04:00
Tony Garnock-Jones f747edbbfd (cargo-release) version 0.8.0 2021-08-13 21:39:43 -04:00
Tony Garnock-Jones cafffd5248 (cargo-release) version 0.3.0-alpha.1 2021-08-13 21:34:07 -04:00
Tony Garnock-Jones 9b7030845e (cargo-release) version 0.8.0-alpha.1 2021-08-13 21:34:07 -04:00
Tony Garnock-Jones 4ef155dd8f (cargo-release) version 0.8.0-alpha.1 2021-08-13 21:34:07 -04:00
Tony Garnock-Jones 085fd6735b More docs 2021-08-13 21:28:23 -04:00
Tony Garnock-Jones fe9c0325eb No need to expose these at top level 2021-08-13 21:28:15 -04:00
Tony Garnock-Jones 6a505a4150 More docs 2021-08-13 21:25:31 -04:00
Tony Garnock-Jones 2e2d5bfb5d Document dataspace.rs; remove "churn" field 2021-08-13 20:39:27 -04:00
Tony Garnock-Jones 4491873ac8 Docs 2021-08-13 20:16:12 -04:00
Tony Garnock-Jones aee65ea029 Finish actor.rs docs 2021-08-13 20:12:11 -04:00
Tony Garnock-Jones 931c4e5cd1 Some documentation; rename Debtor to Account 2021-08-13 15:51:11 -04:00
Tony Garnock-Jones 5a3a572dcf (cargo-release) version 0.8.0-alpha.0 2021-08-13 12:51:48 -04:00
Tony Garnock-Jones 2384b29754 (cargo-release) version 0.3.0-alpha.0 2021-08-13 12:51:18 -04:00
Tony Garnock-Jones f428aa363f (cargo-release) version 0.8.0-alpha.0 2021-08-13 12:50:40 -04:00
Tony Garnock-Jones 55b2cb8b1c Description, license etc. 2021-08-13 12:50:11 -04:00
Tony Garnock-Jones c4469dfa98 dev-scripts 2021-08-13 12:41:48 -04:00
Tony Garnock-Jones 550851646d Copy Makefile from preserves 2021-08-13 12:40:31 -04:00
Tony Garnock-Jones 17fea66291 Implement missing cases in syndicate_macros::pattern 2021-08-13 07:11:37 -04:00
Tony Garnock-Jones dc0a4cc1ab Be more proc_macro2-centric 2021-08-13 06:51:20 -04:00
Tony Garnock-Jones bb519b625b Allow variable labels in patterns (see pingpong.rs) 2021-08-13 06:43:34 -04:00
Tony Garnock-Jones 2255a54f1a Use syndicate-macros a little more 2021-08-13 00:02:05 -04:00
Tony Garnock-Jones 5bb665ef62 Use syndicate-macros crate in syndicate-server 2021-08-12 23:58:58 -04:00
Tony Garnock-Jones 82dd821d35 Default to binary (!) 2021-08-12 23:58:38 -04:00
Tony Garnock-Jones f0205f06ca Initial commit of syndicate-macros crate, still incomplete 2021-08-12 23:58:23 -04:00
Tony Garnock-Jones 4f30faa1ba Split out syndicate-server crate 2021-08-12 21:42:14 -04:00
Tony Garnock-Jones 37fd904210 First reorganisation of workspace into a ... workspace 2021-08-12 21:13:49 -04:00
Tony Garnock-Jones 02dae17a4b Stub preserves-schema plugin 2021-08-12 21:06:09 -04:00
Tony Garnock-Jones 0a9f4bd97a Bump preserves dep 2021-08-12 21:05:59 -04:00
Tony Garnock-Jones 6d563bfd91 Document name macro 2021-08-11 18:03:50 -04:00
Tony Garnock-Jones b14ebe6de5 Avoid needless schemas/mod.rs file 2021-08-11 18:03:43 -04:00
Tony Garnock-Jones 3be1ca28e7 Clean up ServerConfig 2021-08-11 17:48:04 -04:00
Tony Garnock-Jones 1b1df985a4 Clean up ActorId and Handle types and allocators 2021-08-11 17:40:48 -04:00
Tony Garnock-Jones 1154261062 Document bag.rs 2021-08-11 17:40:32 -04:00
Tony Garnock-Jones 5e5ee0bbdd Introduce "AnyValue", a better name for "internal_protocol::_Any" 2021-08-11 17:16:01 -04:00
Tony Garnock-Jones ca85f27fbc Remove unused republications 2021-08-11 16:29:51 -04:00
Tony Garnock-Jones e130744246 Update README 2021-08-11 16:22:53 -04:00
Tony Garnock-Jones 909356caa2 Remove name field from DBind 2021-08-11 16:12:00 -04:00
Tony Garnock-Jones a9de02577d Merge latest changes from the syndicate-protocols repository 2021-08-11 16:08:37 -04:00
Tony Garnock-Jones 9445a71b53 Use correct latest version of preserves-schemac 2021-08-11 16:05:36 -04:00
Tony Garnock-Jones f7a3f21300 Merge latest changes from the syndicate-protocols repository 2021-08-11 15:44:55 -04:00
Tony Garnock-Jones 8cd601a777 Remove name field from DBind 2021-08-11 15:43:29 -04:00
Tony Garnock-Jones 4464655a4f Final preserves version bump for today 2021-08-10 22:08:20 -04:00
Tony Garnock-Jones 4b872828a5 Use OUT_DIR 2021-08-10 22:07:48 -04:00
Tony Garnock-Jones 945a9fc7f7 New preserves releases 2021-08-10 10:54:05 -04:00
Tony Garnock-Jones ef5b53d52b Remove old peer code 2021-08-10 07:48:09 -04:00
Tony Garnock-Jones 9d81de8b7b Newline after each output text packet 2021-08-09 21:55:01 -04:00
Tony Garnock-Jones 54454e608b Binary/text autodetect 2021-08-09 10:02:45 -04:00
Tony Garnock-Jones 4db9511b12 Better unix logging 2021-08-09 10:02:32 -04:00
Tony Garnock-Jones 46d6d80b42 Unix socket listener 2021-08-09 09:19:00 -04:00
Tony Garnock-Jones 107a04f4c9 Preserves updated num dependency 2021-08-09 09:18:40 -04:00
Tony Garnock-Jones adfabadf7f Use literal byte syntax 2021-08-02 21:54:28 +02:00
Tony Garnock-Jones 09da10b299 Bump lockfile for new Preserves 2021-08-02 21:54:19 +02:00
Tony Garnock-Jones 2eca2a0cc1 Experimental: use Notify for Debtor credit flow 2021-07-27 16:31:00 +02:00
Tony Garnock-Jones ff130e9443 Now we are using Mutex instead of RwLock, we don't need to be Sync everywhere 2021-07-26 10:53:56 +02:00
Tony Garnock-Jones 73b7ad75bd RwLock -> Mutex 2021-07-25 23:12:07 +02:00
Tony Garnock-Jones 5b97628137 Cargo update 2021-07-25 18:48:08 +02:00
Tony Garnock-Jones 20539da63b Use recent shared-state changes to avoid scheduling overhead in relay.rs by activating the relay actor right from the input loop 2021-07-25 01:10:43 +02:00
Tony Garnock-Jones 35f510aa0b More fine-grained state and new ownership relations, to potentially permit avoiding scheduling overhead by directly entering an actor's runtime context 2021-07-24 23:22:01 +02:00
Tony Garnock-Jones 90bb32e38c Tweak names 2021-07-23 08:11:48 +02:00
Tony Garnock-Jones d550ba2705 Simplify 2021-07-23 08:10:09 +02:00
Tony Garnock-Jones 908ab08f4c Bump preserves versions 2021-07-22 16:58:31 +02:00
Tony Garnock-Jones d85b980834 Typed Refs (!). Decent speedup by avoiding marshalling 2021-07-22 16:53:56 +02:00
Tony Garnock-Jones f0a9894ee8 Merge latest changes from the syndicate-protocols repository 2021-07-22 14:13:15 +02:00
Tony Garnock-Jones 0b2c7ecfe1 Rename RefAny -> Cap 2021-07-22 14:12:53 +02:00
Tony Garnock-Jones a8289668df Merge latest changes from the syndicate-protocols repository 2021-07-22 13:43:43 +02:00
Tony Garnock-Jones be6b30bba6 Switch Ref -> RefAny 2021-07-22 13:43:14 +02:00
Tony Garnock-Jones 6c3f039026 Use u64 internally for assertion handles 2021-07-22 10:07:49 +02:00
Tony Garnock-Jones 4a69d5573f Actions as closures rather than data 2021-07-22 09:56:21 +02:00
Tony Garnock-Jones 21a69618cf Rearrange Entity storage: they are now held in Refs 2021-07-22 01:05:08 +02:00
Tony Garnock-Jones aa1755be0f Avoid needless translation of internal events 2021-07-21 23:53:55 +02:00
Tony Garnock-Jones 052da62572 Switch to preserves-schema deserialize; minor performance tweaks 2021-07-21 23:29:53 +02:00
Tony Garnock-Jones 8cf6ace5f6 Update Cargo.lock 2021-07-21 22:00:46 +02:00
Tony Garnock-Jones a1766875fb A really interesting and apparently effective approach to internal flow control 2021-07-15 13:13:22 +02:00
Tony Garnock-Jones 94fd0d3f14 Draw the rest of the bloody owl 2021-07-15 09:13:31 +02:00
Tony Garnock-Jones bc99dad13e Merge latest changes from the syndicate-protocols repository 2021-07-12 21:10:39 +02:00
Tony Garnock-Jones 993cf78a38 DeBruijn-like binding in patterns 2021-07-12 21:10:19 +02:00
Tony Garnock-Jones 432b7bdf05 Immediate self-messaging; flush message for relay 2021-07-12 17:41:12 +02:00
Tony Garnock-Jones d968eb34f2 Gatekeeper service etc. Still missing attenuations etc. But almost there! 2021-07-09 00:04:11 +02:00
Tony Garnock-Jones e5acc6a7a6 It seems the recursion_limit isn't needed at the moment 2021-07-06 20:57:49 +02:00
Tony Garnock-Jones 7fb20c11af It actually takes connections again now! Still WIP 2021-07-06 20:56:36 +02:00
Tony Garnock-Jones ede0e29370 A few days' work redoing syndicate-rs - still WIP 2021-07-03 09:04:03 +02:00
Tony Garnock-Jones 3e96fa87d4 Merge latest changes from the syndicate-protocols repository 2021-07-03 09:01:16 +02:00
Tony Garnock-Jones f7c6e7d164 Specify embedded type for sturdy.prs 2021-07-03 09:00:58 +02:00
Tony Garnock-Jones 64e4d9cb74 Merge latest changes from the syndicate-protocols repository 2021-07-02 16:51:47 +02:00
Tony Garnock-Jones e7ddfdf311 EntityRef.Ref in dataspacePatterns 2021-07-02 16:51:21 +02:00
Tony Garnock-Jones 142f84a428 Merge latest changes from the syndicate-protocols repository 2021-07-02 16:48:28 +02:00
Tony Garnock-Jones dab79020f4 Variations on protocol for internal and external use 2021-07-02 16:48:15 +02:00
Tony Garnock-Jones 4b6b637223 Merge latest changes from the syndicate-protocols repository 2021-07-02 10:12:29 +02:00
Tony Garnock-Jones a6639b5380 Error packets 2021-07-02 10:11:53 +02:00
Tony Garnock-Jones 89e7b31d02 Merge latest changes from the syndicate-protocols repository 2021-07-01 10:04:39 +02:00
Tony Garnock-Jones 06e922c511 Compiled schema bundle 2021-07-01 10:04:26 +02:00
Tony Garnock-Jones 4e46d4a381 make pull-protocols 2021-07-01 10:01:19 +02:00
Tony Garnock-Jones 94c63ef992 Add 'protocols/' from commit '93c196acaaf85e406f579a94489af5f1ade04ebd'
git-subtree-dir: protocols
git-subtree-mainline: 0c1080eb0b
git-subtree-split: 93c196acaa
2021-07-01 10:00:50 +02:00
Tony Garnock-Jones 93c196acaa Move schemas into subdirectory after subtree split 2021-07-01 09:51:53 +02:00
Tony Garnock-Jones e034486aaa Update schemas to match new identifier restrictions. 2021-06-25 09:45:38 +02:00
Tony Garnock-Jones 824b078eac Simpler stream connection protocol. 2021-06-18 13:48:12 +02:00
Tony Garnock-Jones cf93327ed6 Services and service activation 2021-06-17 14:57:06 +02:00
Tony Garnock-Jones 6cfe8c2ba4 `when` -> `on`; StreamConnection API; better `this-target`; tcp-listen errors
- spec-generic StreamConnection translators, for simple TCP API
 - `when` -> `on`, better use for event-expanders
 - Removal of special processing of `at`, making `this-target` properly lexically scopeable
 - TcpListenError and handling of tcp-listen errors
 - SYNDICATE_COLUMNS for pretty-printing of dataspace traces
 - Repair driver-support.rkt thread shutdown turn-taking
 - Refinements to stream protocols and implementation
 - Improvements to syntax location preservation in syntax.rkt
2021-06-17 13:38:30 +02:00
Tony Garnock-Jones b6bc816daf Split out experimental "stream" protocols; make tcp.rkt use them; more inertness checks
Also, a few other important changes:
 - Better printing of entity-ref structs
 - Inertness checks on assertion retraction (!) and preventer-disarm
 - Correct selection of active facet during dataflow recomputations
 - Repair silly omission in turn-assert/dataflow!
2021-06-16 21:44:07 +02:00
Tony Garnock-Jones a73b6a9f4a Whole-packet flow credit 2021-06-15 12:46:09 +02:00
Tony Garnock-Jones f6cb595709 Add ConnectionPeer assertions; rename TcpOutbound -> TcpRemote and TcpInbound -> TcpLocal 2021-06-15 12:37:14 +02:00
Tony Garnock-Jones afe36c630d Refactor/repair tcp.prs and tcp.rkt 2021-06-11 15:29:12 +02:00
Tony Garnock-Jones 5850c5b06d Credit-based flow control on tcp driver; line mode 2021-06-11 14:18:53 +02:00
Tony Garnock-Jones b0d0eb3a11 drivers/racket-event.rkt 2021-06-10 13:34:18 +02:00
Tony Garnock-Jones 21d09f81e5 ActiveSocket-close now gets a string, not an embedded exn 2021-06-10 13:33:16 +02:00
Tony Garnock-Jones 8b5e74048e Beginnings of a TCP driver 2021-06-10 10:00:43 +02:00
Tony Garnock-Jones 201f5433e1 Port timer driver from older syndicate/rkt implementation 2021-06-09 23:08:06 +02:00
Tony Garnock-Jones 8cbe2475e3 TAttenuate 2021-06-09 15:06:58 +02:00
Tony Garnock-Jones 930f7eda00 Move box-protocol to a #lang preserves-schema module 2021-06-03 23:22:46 +02:00
Tony Garnock-Jones a932fa1428 Pattern decomposition 2021-06-03 15:58:48 +02:00
Tony Garnock-Jones 3412eabcff Update schemas for new embedded syntax; steps toward pattern support 2021-06-02 06:57:48 +02:00
Tony Garnock-Jones e47a37e3f0 First steps to an actual novy implementation 2021-05-27 10:36:35 +02:00
128 changed files with 14952 additions and 2777 deletions

1
.gitignore vendored

@@ -1,3 +1,4 @@
/target
**/*.rs.bk
localdev/
scratch/

2101
Cargo.lock generated

File diff suppressed because it is too large.


@@ -1,49 +1,34 @@
[package]
name = "syndicate-rs"
version = "0.1.0"
authors = ["Tony Garnock-Jones <tonyg@leastfixedpoint.com>"]
edition = "2018"
cargo-features = ["strip"]
[patch.crates-io]
preserves = { path = "/home/tonyg/src/preserves/implementations/rust/preserves" }
[workspace]
members = [
"syndicate",
"syndicate-macros",
"syndicate-schema-plugin",
"syndicate-server",
"syndicate-tools",
]
[features]
vendored-openssl = ["openssl/vendored"]
# [patch.crates-io]
# #
# # Use a bind mount for localdev:
# #
# # mkdir localdev
# # sudo mount --bind /home/tonyg/src localdev
# #
# preserves = { path = "localdev/preserves/implementations/rust/preserves" }
# preserves-schema = { path = "localdev/preserves/implementations/rust/preserves-schema" }
[profile.release]
strip = true
# debug = true
# lto = true
[profile.bench]
debug = true
lto = true
[lib]
name = "syndicate"
[dependencies]
preserves = "0.13.0"
serde = { version = "1.0", features = ["derive", "rc"] }
serde_bytes = "0.11"
tokio = { version = "0.2.21", features = ["macros", "rt-threaded", "sync", "dns", "tcp", "time", "stream"] }
tokio-util = { version = "0.3.1", features = ["codec"] }
bytes = "0.5.4"
futures = "0.3.5"
structopt = "0.3.14"
tungstenite = "0.10.1"
tokio-tungstenite = "0.10.1"
tracing = "0.1.14"
tracing-subscriber = "0.2.5"
tracing-futures = "0.2.4"
# Only used for vendored-openssl, which in turn is being used for cross-builds
openssl = { version = "0.10", optional = true }
[dev-dependencies]
criterion = "0.3"
[[bench]]
name = "bench_dataspace"
harness = false
# [patch.crates-io]
# # Unfortunately, until [1] is fixed (perhaps via [2]), we have to use a patched proc-macro2.
# # [1]: https://github.com/dtolnay/proc-macro2/issues/402
# # [2]: https://github.com/dtolnay/proc-macro2/pull/407
# proc-macro2 = { git = "https://github.com/tonyg/proc-macro2", branch = "repair_span_start_end" }

6
Cross.toml Normal file

@@ -0,0 +1,6 @@
[build.env]
# Both of these are needed to workaround https://github.com/rust-embedded/cross/issues/598
passthrough = [
"RUSTFLAGS",
"CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS",
]


@@ -1,24 +1,41 @@
# cargo install cargo-watch
watch:
cargo watch -c -x check -x 'test -- --nocapture'
# Use cargo release to manage publication and versions etc.
#
# cargo install cargo-release
run-watch:
RUST_BACKTRACE=1 cargo watch -c -x 'build --all-targets' -x 'run'
clippy-watch:
cargo watch -c -x clippy
inotifytest:
inotifytest sh -c 'reset; cargo build && RUST_BACKTRACE=1 cargo test -- --nocapture'
binary: binary-release
binary-release:
cargo build --release --all-targets
binary-debug:
all:
cargo build --all-targets
test:
cargo test
test-all:
cargo test --all-targets
ws-bump:
cargo workspaces version \
--no-global-tag \
--individual-tag-prefix '%n-v' \
--allow-branch 'main' \
$(BUMP_ARGS)
ws-publish:
cargo workspaces publish \
--from-git
PROTOCOLS_BRANCH=main
pull-protocols:
git subtree pull -P syndicate/protocols \
-m 'Merge latest changes from the syndicate-protocols repository' \
git@git.syndicate-lang.org:syndicate-lang/syndicate-protocols \
$(PROTOCOLS_BRANCH)
static: static-x86_64
static-%:
CARGO_TARGET_DIR=target/target.$* cross build --target $*-unknown-linux-musl --features vendored-openssl,jemalloc
###########################################################################
# OK, rather than doing it myself (per
# https://eighty-twenty.org/2019/10/15/cross-compiling-rust), it turns
# out past a certain level of complexity we need more than just a
@@ -34,10 +51,30 @@ binary-debug:
# etc, ready on my system despite being otherwise able to rely on
# cross. I think. It's a bit confusing.
arm-binary: arm-binary-release
x86_64-binary: x86_64-binary-release
arm-binary-release:
cross build --target=armv7-unknown-linux-musleabihf --release --all-targets --features vendored-openssl
x86_64-binary-release:
CARGO_TARGET_DIR=target/target.x86_64 cross build --target x86_64-unknown-linux-musl --release --all-targets --features vendored-openssl,jemalloc
arm-binary-debug:
cross build --target=armv7-unknown-linux-musleabihf --all-targets --features vendored-openssl
x86_64-binary-debug:
CARGO_TARGET_DIR=target/target.x86_64 cross build --target x86_64-unknown-linux-musl --all-targets --features vendored-openssl
armv7-binary: armv7-binary-release
armv7-binary-release:
CARGO_TARGET_DIR=target/target.armv7 cross build --target=armv7-unknown-linux-musleabihf --release --all-targets --features vendored-openssl
armv7-binary-debug:
CARGO_TARGET_DIR=target/target.armv7 cross build --target=armv7-unknown-linux-musleabihf --all-targets --features vendored-openssl
# As of 2023-05-12 (and probably earlier!) this is no longer required with current Rust nightlies
# # Hack to workaround https://github.com/rust-embedded/cross/issues/598
# HACK_WORKAROUND_ISSUE_598=CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS="-C link-arg=/usr/local/aarch64-linux-musl/lib/libc.a"
aarch64-binary: aarch64-binary-release
aarch64-binary-release:
CARGO_TARGET_DIR=target/target.aarch64 cross build --target=aarch64-unknown-linux-musl --release --all-targets --features vendored-openssl,jemalloc
aarch64-binary-debug:
CARGO_TARGET_DIR=target/target.aarch64 cross build --target=aarch64-unknown-linux-musl --all-targets --features vendored-openssl


@@ -2,28 +2,51 @@
A Rust implementation of:
- the Syndicated Actor model, including assertion-based
communication, failure-handling, capability-style security,
dataspace entities, and facets as a structuring principle;
- the Syndicate network protocol, including
- a high-speed Dataspace indexing structure (see
[HOWITWORKS.md](https://git.syndicate-lang.org/syndicate-lang/syndicate-rkt/src/commit/90c4c60699069b496491b81ee63b5a45ffd638cb/syndicate/HOWITWORKS.md)
from `syndicate-rkt`),
- a standalone Syndicate protocol *broker* service, and
- a handful of [examples](examples/).
- a high-speed Dataspace indexing structure
([`skeleton.rs`](syndicate/src/skeleton.rs); see also
[HOWITWORKS.md](https://git.syndicate-lang.org/syndicate-lang/syndicate-rkt/src/commit/90c4c60699069b496491b81ee63b5a45ffd638cb/syndicate/HOWITWORKS.md)
from `syndicate-rkt`) and
- a standalone Syndicate protocol "broker" service (roughly
comparable in scope and intent to D-Bus); and
- a handful of [example programs](syndicate-server/examples/).
![The Syndicate/rs server running.](syndicate-rs-server.png)
*The Syndicate/rs server running.*
## Quickstart
From docker or podman:
docker run -it --rm leastfixedpoint/syndicate-server /syndicate-server -p 8001
Build and run from source:
git clone https://git.syndicate-lang.org/syndicate-lang/syndicate-rs
cd syndicate-rs
cargo build --release
./target/release/syndicate-server
./target/release/syndicate-server -p 8001
If you have [`mold`](https://github.com/rui314/mold) available (`apt install mold`), you may be
able to get faster linking by creating `.cargo/config.toml` as follows:
[build]
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
Enabling the `jemalloc` feature can get a *substantial* (~20%-50%) improvement in throughput.
## Running the examples
In one window, start the server:
In one window, start the server with a basic configuration:
./target/release/syndicate-server
./target/release/syndicate-server -c dev-scripts/benchmark-config.pr
Then, choose one of the examples below.
@@ -61,7 +84,7 @@ about who kicks off the pingpong session.
You may find better performance by restricting the server to fewer
cores than you have available. For example, for me, running
taskset -c 0,1 ./target/release/syndicate-server
taskset -c 0,1 ./target/release/syndicate-server -c dev-scripts/benchmark-config.pr
roughly *quadruples* throughput for a single producer/consumer pair,
roughly *doubles* throughput for a single producer/consumer pair,
on my 48-core AMD CPU.


@@ -1,184 +0,0 @@
use criterion::{criterion_group, criterion_main, Criterion};
use futures::Sink;
use std::mem::drop;
use std::pin::Pin;
use std::sync::{Arc, Mutex, atomic::{AtomicU64, Ordering}};
use std::task::{Context, Poll};
use std::thread;
use std::time::Instant;
use structopt::StructOpt;
use syndicate::peer::Peer;
use syndicate::{config, spaces, packets, value::{Value, IOValue}};
use tokio::runtime::Runtime;
use tokio::sync::mpsc::{unbounded_channel, UnboundedSender};
use tracing::Level;
struct SinkTx<T> {
tx: Option<UnboundedSender<T>>,
}
impl<T> SinkTx<T> {
fn new(tx: UnboundedSender<T>) -> Self {
SinkTx { tx: Some(tx) }
}
}
impl<T> Sink<T> for SinkTx<T> {
type Error = packets::Error;
fn poll_ready(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Result<(), packets::Error>> {
Poll::Ready(Ok(()))
}
fn start_send(self: Pin<&mut Self>, v: T) -> Result<(), packets::Error> {
self.tx.as_ref().unwrap().send(v).map_err(|e| packets::Error::Message(e.to_string()))
}
fn poll_flush(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Result<(), packets::Error>> {
Poll::Ready(Ok(()))
}
fn poll_close(mut self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Result<(), packets::Error>> {
(&mut self).tx = None;
Poll::Ready(Ok(()))
}
}
#[inline]
fn says(who: IOValue, what: IOValue) -> IOValue {
let mut r = Value::simple_record("Says", 2);
r.fields_vec_mut().push(who);
r.fields_vec_mut().push(what);
r.finish().wrap()
}
pub fn bench_pub(c: &mut Criterion) {
let filter = tracing_subscriber::filter::EnvFilter::from_default_env()
.add_directive(tracing_subscriber::filter::LevelFilter::INFO.into());
let subscriber = tracing_subscriber::FmtSubscriber::builder()
.with_ansi(true)
.with_max_level(Level::TRACE)
.with_env_filter(filter)
.finish();
tracing::subscriber::set_global_default(subscriber)
.expect("Could not set tracing global subscriber");
c.bench_function("publication alone", |b| {
b.iter_custom(|iters| {
let no_args: Vec<String> = vec![];
let config = Arc::new(config::ServerConfig::from_iter(no_args.iter()));
let spaces = Arc::new(Mutex::new(spaces::Spaces::new()));
let (c2s_tx, c2s_rx) = unbounded_channel();
let (s2c_tx, _s2c_rx) = unbounded_channel();
let runtime_handle = thread::spawn(move || {
let mut rt = Runtime::new().unwrap();
rt.block_on(async {
Peer::new(0, c2s_rx, SinkTx::new(s2c_tx)).run(spaces, &config).await.unwrap();
})
});
c2s_tx.send(Ok(packets::C2S::Connect(Value::from("bench_pub").wrap()))).unwrap();
let turn = packets::C2S::Turn(vec![
packets::Action::Message(says(Value::from("bench_pub").wrap(),
Value::ByteString(vec![]).wrap()))]);
let start = Instant::now();
for _ in 0..iters {
c2s_tx.send(Ok(turn.clone())).unwrap();
}
drop(c2s_tx);
runtime_handle.join().unwrap();
start.elapsed()
})
});
c.bench_function("publication and subscription", |b| {
b.iter_custom(|iters| {
let no_args: Vec<String> = vec![];
let config = Arc::new(config::ServerConfig::from_iter(no_args.iter()));
let spaces = Arc::new(Mutex::new(spaces::Spaces::new()));
let turn_count = Arc::new(AtomicU64::new(0));
let (c2s_tx, c2s_rx) = unbounded_channel();
let c2s_tx = Arc::new(c2s_tx);
{
let c2s_tx = c2s_tx.clone();
c2s_tx.send(Ok(packets::C2S::Connect(Value::from("bench_pub").wrap()))).unwrap();
let discard: IOValue = Value::simple_record0("discard").wrap();
let capture: IOValue = Value::simple_record1("capture", discard).wrap();
c2s_tx.send(Ok(packets::C2S::Turn(vec![
packets::Action::Assert(Value::from(0).wrap(),
Value::simple_record1(
"observe",
says(Value::from("bench_pub").wrap(),
capture)).wrap())]))).unwrap();
// tracing::info!("Sending {} messages", iters);
let turn = packets::C2S::Turn(vec![
packets::Action::Message(says(Value::from("bench_pub").wrap(),
Value::ByteString(vec![]).wrap()))]);
for _ in 0..iters {
c2s_tx.send(Ok(turn.clone())).unwrap();
}
c2s_tx.send(Ok(packets::C2S::Turn(vec![
packets::Action::Clear(Value::from(0).wrap())]))).unwrap();
}
let start = Instant::now();
let runtime_handle = {
let turn_count = turn_count.clone();
let mut c2s_tx = Some(c2s_tx.clone());
thread::spawn(move || {
let mut rt = Runtime::new().unwrap();
rt.block_on(async move {
let (s2c_tx, mut s2c_rx) = unbounded_channel();
let consumer_handle = tokio::spawn(async move {
while let Some(p) = s2c_rx.recv().await {
// tracing::info!("Consumer got {:?}", &p);
match p {
packets::S2C::Ping() => (),
packets::S2C::Turn(actions) => {
for a in actions {
match a {
packets::Event::Msg(_, _) => {
turn_count.fetch_add(1, Ordering::Relaxed);
},
packets::Event::End(_) => {
c2s_tx.take();
}
_ => panic!("Unexpected action: {:?}", a),
}
}
},
_ => panic!("Unexpected packet: {:?}", p),
}
}
// tracing::info!("Consumer terminating");
});
Peer::new(0, c2s_rx, SinkTx::new(s2c_tx)).run(spaces, &config).await.unwrap();
consumer_handle.await.unwrap();
})
})
};
drop(c2s_tx);
runtime_handle.join().unwrap();
let elapsed = start.elapsed();
let actual_turns = turn_count.load(Ordering::SeqCst);
if actual_turns != iters {
panic!("Expected {}, got {} messages", iters, actual_turns);
}
elapsed
})
});
}
criterion_group!(publish, bench_pub);
criterion_main!(publish);


@@ -0,0 +1,3 @@
let ?root_ds = dataspace
<require-service <relay-listener <tcp "0.0.0.0" 9001> $gatekeeper>>
<bind <ref { oid: "syndicate" key: #x"" }> $root_ds #f>


@@ -0,0 +1,2 @@
#!/bin/sh
while true; do ../target/release/examples/dirty-consumer "$@"; sleep 2; done


@@ -0,0 +1,2 @@
#!/bin/sh
while true; do ../target/release/examples/dirty-producer "$@"; sleep 2; done


@@ -0,0 +1,2 @@
#!/bin/sh
while true; do ../target/release/examples/state-consumer "$@"; sleep 2; done


@@ -0,0 +1,2 @@
#!/bin/sh
while true; do ../target/release/examples/state-producer "$@"; sleep 2; done

7
dev-scripts/run-server Executable file

@@ -0,0 +1,7 @@
#!/bin/sh
TASKSET='taskset -c 0,1'
if [ $(uname -s) = 'Darwin' ]
then
TASKSET=
fi
make -C ../syndicate-server binary && exec $TASKSET ../target/release/syndicate-server -c benchmark-config.pr "$@"

1
docker/.gitignore vendored Normal file

@@ -0,0 +1 @@
syndicate-server.*

6
docker/Dockerfile Normal file

@@ -0,0 +1,6 @@
FROM busybox
RUN mkdir /data
ARG TARGETARCH
COPY ./syndicate-server.$TARGETARCH /syndicate-server
EXPOSE 1
CMD ["/syndicate-server", "-c", "/data", "-p", "1"]

37
docker/Makefile Normal file

@@ -0,0 +1,37 @@
U=leastfixedpoint
I=syndicate-server
ARCHITECTURES:=amd64 arm arm64
SERVERS:=$(patsubst %,syndicate-server.%,$(ARCHITECTURES))
VERSION=$(shell ./syndicate-server.$(shell ./docker-architecture $$(uname -m)) --version | cut -d' ' -f2)
all:
.PHONY: all clean image push push-only
clean:
rm -f syndicate-server.*
-podman images -q $(U)/$(I) | sort -u | xargs podman rmi -f
image: $(SERVERS)
for A in $(ARCHITECTURES); do set -x; \
podman build --platform=linux/$$A \
-t $(U)/$(I):$(VERSION)-$$A \
-t $(U)/$(I):latest-$$A \
.; \
done
rm -f tmp.image
push: image push-only
push-only:
$(patsubst %,podman push $(U)/$(I):$(VERSION)-%;,$(ARCHITECTURES))
$(patsubst %,podman push $(U)/$(I):latest-%;,$(ARCHITECTURES))
podman rmi -f $(U)/$(I):$(VERSION) $(U)/$(I):latest
podman manifest create $(U)/$(I):$(VERSION) $(patsubst %,$(U)/$(I):$(VERSION)-%,$(ARCHITECTURES))
podman manifest create $(U)/$(I):latest $(patsubst %,$(U)/$(I):latest-%,$(ARCHITECTURES))
podman manifest push $(U)/$(I):$(VERSION)
podman manifest push $(U)/$(I):latest
syndicate-server.%:
make -C .. $$(./alpine-architecture $*)-binary-release
cp -a ../target/target.$$(./alpine-architecture $*)/$$(./alpine-architecture $*)-unknown-linux-musl*/release/syndicate-server $@
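The `VERSION` line in the Makefile above runs the just-built server binary with `--version` and keeps the second space-separated field of its output. A minimal sketch of that extraction; the sample version string is illustrative, not captured from a real run:

```shell
# Mirror of docker/Makefile's VERSION derivation: take the second
# space-separated field of the binary's --version output.
version_of() { printf '%s\n' "$1" | cut -d' ' -f2; }

version_of 'syndicate-server 0.45.0'    # prints 0.45.0
```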

9
docker/README.md Normal file

@@ -0,0 +1,9 @@
# Docker images for syndicate-server
Build using podman:
apt install podman
and at least until the dependencies are fixed (?),
apt install uidmap slirp4netns

6
docker/alpine-architecture Executable file

@@ -0,0 +1,6 @@
#!/bin/sh
case $1 in
amd64) echo x86_64;;
arm) echo armv7;;
arm64) echo aarch64;;
esac

6
docker/docker-architecture Executable file

@@ -0,0 +1,6 @@
#!/bin/sh
case $1 in
x86_64) echo amd64;;
armv7) echo arm;;
aarch64) echo arm64;;
esac
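Taken together, `alpine-architecture` and `docker-architecture` above are inverse maps between Docker/OCI architecture names and the musl toolchain architecture prefixes used in the cross-build target triples. A combined sketch of the pair:

```shell
# docker name -> toolchain name (as in docker/alpine-architecture)
alpine_arch() {
  case $1 in
    amd64) echo x86_64;;
    arm) echo armv7;;
    arm64) echo aarch64;;
  esac
}

# toolchain name -> docker name (as in docker/docker-architecture)
docker_arch() {
  case $1 in
    x86_64) echo amd64;;
    armv7) echo arm;;
    aarch64) echo arm64;;
  esac
}

docker_arch "$(alpine_arch arm64)"    # prints arm64 (round trip)
```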


@@ -0,0 +1,9 @@
version: "3"
services:
syndicate:
image: leastfixedpoint/syndicate-server
ports:
- "1:1"
volumes:
- "/etc/syndicate:/data"


@@ -1,61 +0,0 @@
#![recursion_limit = "256"]
use syndicate::{V, value::Value};
use syndicate::packets::{ClientCodec, C2S, S2C, Action};
use tokio::net::TcpStream;
use tokio_util::codec::Framed;
use futures::SinkExt;
use futures::StreamExt;
use futures::FutureExt;
use futures::select;
use core::time::Duration;
use tokio::time::interval;
#[inline]
fn says(who: V, what: V) -> V {
let mut r = Value::simple_record("Says", 2);
r.fields_vec_mut().push(who);
r.fields_vec_mut().push(what);
r.finish().wrap()
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let discard: V = Value::simple_record0("discard").wrap();
let capture: V = Value::simple_record1("capture", discard).wrap();
let mut frames = Framed::new(TcpStream::connect("127.0.0.1:8001").await?, ClientCodec::new());
frames.send(C2S::Connect(Value::from("chat").wrap())).await?;
frames.send(
C2S::Turn(vec![Action::Assert(
Value::from(0).wrap(),
Value::simple_record1("observe", says(capture.clone(), capture)).wrap())]))
.await?;
let mut stats_timer = interval(Duration::from_secs(1));
let mut turn_counter = 0;
let mut event_counter = 0;
loop {
select! {
_instant = stats_timer.next().boxed().fuse() => {
print!("{:?} turns, {:?} events in the last second\n", turn_counter, event_counter);
turn_counter = 0;
event_counter = 0;
},
frame = frames.next().boxed().fuse() => match frame {
None => return Ok(()),
Some(res) => match res? {
S2C::Err(msg, _) => return Err(msg.into()),
S2C::Turn(es) => {
// print!("{:?}\n", es);
turn_counter = turn_counter + 1;
event_counter = event_counter + es.len();
},
S2C::Ping() => frames.send(C2S::Pong()).await?,
S2C::Pong() => (),
}
},
}
}
}


@@ -1,193 +0,0 @@
#![recursion_limit = "512"]
use core::time::Duration;
use futures::FutureExt;
use futures::SinkExt;
use futures::StreamExt;
use futures::select;
use std::time::{SystemTime, SystemTimeError};
use structopt::StructOpt;
use tokio::net::TcpStream;
use tokio::time::interval;
use tokio_util::codec::Framed;
use syndicate::packets::{ClientCodec, C2S, S2C, Action, Event};
use syndicate::value::{NestedValue, Value, IOValue};
#[derive(Clone, Debug, StructOpt)]
pub struct PingConfig {
#[structopt(short = "t", default_value = "1")]
turn_count: u32,
#[structopt(short = "a", default_value = "1")]
action_count: u32,
#[structopt(short = "l", default_value = "0")]
report_latency_every: usize,
#[structopt(short = "b", default_value = "0")]
bytes_padding: usize,
}
#[derive(Clone, Debug, StructOpt)]
pub enum PingPongMode {
Ping(PingConfig),
Pong,
}
#[derive(Clone, Debug, StructOpt)]
pub struct Config {
#[structopt(subcommand)]
mode: PingPongMode,
#[structopt(default_value = "pingpong")]
dataspace: String,
}
fn now() -> Result<u64, SystemTimeError> {
Ok(SystemTime::now().duration_since(SystemTime::UNIX_EPOCH)?.as_nanos() as u64)
}
fn simple_record2(label: &str, v1: IOValue, v2: IOValue) -> IOValue {
let mut r = Value::simple_record(label, 2);
r.fields_vec_mut().push(v1);
r.fields_vec_mut().push(v2);
r.finish().wrap()
}
fn report_latencies(rtt_ns_samples: &Vec<u64>) {
let n = rtt_ns_samples.len();
let rtt_0 = rtt_ns_samples[0];
let rtt_50 = rtt_ns_samples[n * 1 / 2];
let rtt_90 = rtt_ns_samples[n * 90 / 100];
let rtt_95 = rtt_ns_samples[n * 95 / 100];
let rtt_99 = rtt_ns_samples[n * 99 / 100];
let rtt_99_9 = rtt_ns_samples[n * 999 / 1000];
let rtt_99_99 = rtt_ns_samples[n * 9999 / 10000];
let rtt_max = rtt_ns_samples[n - 1];
println!("rtt: 0% {:05.5}ms, 50% {:05.5}ms, 90% {:05.5}ms, 95% {:05.5}ms, 99% {:05.5}ms, 99.9% {:05.5}ms, 99.99% {:05.5}ms, max {:05.5}ms",
rtt_0 as f64 / 1000000.0,
rtt_50 as f64 / 1000000.0,
rtt_90 as f64 / 1000000.0,
rtt_95 as f64 / 1000000.0,
rtt_99 as f64 / 1000000.0,
rtt_99_9 as f64 / 1000000.0,
rtt_99_99 as f64 / 1000000.0,
rtt_max as f64 / 1000000.0);
println!("msg: 0% {:05.5}ms, 50% {:05.5}ms, 90% {:05.5}ms, 95% {:05.5}ms, 99% {:05.5}ms, 99.9% {:05.5}ms, 99.99% {:05.5}ms, max {:05.5}ms",
rtt_0 as f64 / 2000000.0,
rtt_50 as f64 / 2000000.0,
rtt_90 as f64 / 2000000.0,
rtt_95 as f64 / 2000000.0,
rtt_99 as f64 / 2000000.0,
rtt_99_9 as f64 / 2000000.0,
rtt_99_99 as f64 / 2000000.0,
rtt_max as f64 / 2000000.0);
}
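The removed `report_latencies` helper above reads nearest-rank percentiles out of a sorted sample vector by plain integer index arithmetic (`n * p / 100`). A small shell sketch of the same indexing; awk arrays are 1-based where the Rust `Vec` is 0-based, hence the `+1`:

```shell
# Nearest-rank percentile over sorted input lines, mirroring the
# rtt_ns_samples[n * p / 100] indexing from report_latencies.
pctl() {
  awk -v p="$1" '{ a[NR] = $0 }
    END { i = int(NR * p / 100) + 1; if (i > NR) i = NR; print a[i] }'
}

seq 1 100 | pctl 50    # prints 51, matching rtt_ns_samples[n * 1 / 2]
seq 1 100 | pctl 99    # prints 100
```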
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let config = Config::from_args();
let (send_label, recv_label, report_latency_every, should_echo, bytes_padding) =
match config.mode {
PingPongMode::Ping(ref c) =>
("Ping", "Pong", c.report_latency_every, false, c.bytes_padding),
PingPongMode::Pong =>
("Pong", "Ping", 0, true, 0),
};
let mut frames = Framed::new(TcpStream::connect("127.0.0.1:8001").await?, ClientCodec::new());
frames.send(C2S::Connect(Value::from(config.dataspace).wrap())).await?;
let discard: IOValue = Value::simple_record0("discard").wrap();
let capture: IOValue = Value::simple_record1("capture", discard).wrap();
let pat: IOValue = simple_record2(recv_label, capture.clone(), capture);
frames.send(
C2S::Turn(vec![Action::Assert(
Value::from(0).wrap(),
Value::simple_record1("observe", pat).wrap())]))
.await?;
let padding: IOValue = Value::ByteString(vec![0; bytes_padding]).wrap();
let mut stats_timer = interval(Duration::from_secs(1));
let mut turn_counter = 0;
let mut event_counter = 0;
let mut current_rec: IOValue = simple_record2(send_label,
Value::from(0).wrap(),
padding.clone());
if let PingPongMode::Ping(ref c) = config.mode {
for _ in 0..c.turn_count {
let mut actions = vec![];
current_rec = simple_record2(send_label,
Value::from(now()?).wrap(),
padding.clone());
for _ in 0..c.action_count {
actions.push(Action::Message(current_rec.clone()));
}
frames.send(C2S::Turn(actions)).await?;
}
}
let mut rtt_ns_samples: Vec<u64> = vec![0; report_latency_every];
let mut rtt_batch_count = 0;
loop {
select! {
_instant = stats_timer.next().boxed().fuse() => {
print!("{:?} turns, {:?} events in the last second\n", turn_counter, event_counter);
turn_counter = 0;
event_counter = 0;
},
frame = frames.next().boxed().fuse() => match frame {
None => return Ok(()),
Some(res) => match res? {
S2C::Err(msg, _) => return Err(msg.into()),
S2C::Turn(events) => {
turn_counter = turn_counter + 1;
event_counter = event_counter + events.len();
let mut actions = vec![];
let mut have_sample = false;
for e in events {
match e {
Event::Msg(_, captures) => {
if should_echo || (report_latency_every == 0) {
actions.push(Action::Message(
simple_record2(send_label,
captures[0].clone(),
captures[1].clone())));
} else {
if !have_sample {
let rtt_ns = now()? - captures[0].value().to_u64()?;
rtt_ns_samples[rtt_batch_count] = rtt_ns;
rtt_batch_count = rtt_batch_count + 1;
if rtt_batch_count == report_latency_every {
rtt_ns_samples.sort();
report_latencies(&rtt_ns_samples);
rtt_batch_count = 0;
}
have_sample = true;
current_rec = simple_record2(send_label,
Value::from(now()?).wrap(),
padding.clone());
}
actions.push(Action::Message(current_rec.clone()));
}
}
_ =>
()
}
}
frames.send(C2S::Turn(actions)).await?;
},
S2C::Ping() => frames.send(C2S::Pong()).await?,
S2C::Pong() => (),
}
},
}
}
}


@@ -1,59 +0,0 @@
use futures::{SinkExt, StreamExt, poll};
use std::task::Poll;
use structopt::StructOpt;
use tokio::net::TcpStream;
use tokio_util::codec::Framed;
use syndicate::packets::{ClientCodec, C2S, S2C, Action};
use syndicate::value::{Value, IOValue};
#[derive(Clone, Debug, StructOpt)]
pub struct Config {
#[structopt(short = "a", default_value = "1")]
action_count: u32,
#[structopt(short = "b", default_value = "0")]
bytes_padding: usize,
}
#[inline]
fn says(who: IOValue, what: IOValue) -> IOValue {
let mut r = Value::simple_record("Says", 2);
r.fields_vec_mut().push(who);
r.fields_vec_mut().push(what);
r.finish().wrap()
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let config = Config::from_args();
let mut frames = Framed::new(TcpStream::connect("127.0.0.1:8001").await?, ClientCodec::new());
frames.send(C2S::Connect(Value::from("chat").wrap())).await?;
let padding: IOValue = Value::ByteString(vec![0; config.bytes_padding]).wrap();
loop {
let mut actions = vec![];
for _ in 0..config.action_count {
actions.push(Action::Message(says(Value::from("producer").wrap(),
padding.clone())));
}
frames.send(C2S::Turn(actions)).await?;
loop {
match poll!(frames.next()) {
Poll::Pending => break,
Poll::Ready(None) => {
print!("Server closed connection");
return Ok(());
}
Poll::Ready(Some(res)) => {
let p = res?;
print!("{:?}\n", p);
if let S2C::Ping() = p { frames.send(C2S::Pong()).await? }
}
}
}
}
}


@@ -1,76 +0,0 @@
#![recursion_limit = "256"]
use syndicate::{V, value::Value};
use syndicate::packets::{ClientCodec, C2S, S2C, Action, Event};
use tokio::net::TcpStream;
use tokio_util::codec::Framed;
use futures::SinkExt;
use futures::StreamExt;
use futures::FutureExt;
use futures::select;
use core::time::Duration;
use tokio::time::interval;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let discard: V = Value::simple_record0("discard").wrap();
let capture: V = Value::simple_record1("capture", discard).wrap();
let mut frames = Framed::new(TcpStream::connect("127.0.0.1:8001").await?, ClientCodec::new());
frames.send(C2S::Connect(Value::from("chat").wrap())).await?;
frames.send(
C2S::Turn(vec![Action::Assert(
Value::from(0).wrap(),
Value::simple_record1("observe",
Value::simple_record1("Present", capture).wrap()).wrap())]))
.await?;
let mut stats_timer = interval(Duration::from_secs(1));
let mut turn_counter = 0;
let mut event_counter = 0;
let mut arrival_counter = 0;
let mut departure_counter = 0;
let mut occupancy = 0;
loop {
select! {
_instant = stats_timer.next().boxed().fuse() => {
print!("{:?} turns, {:?} events, {:?} arrivals, {:?} departures, {:?} present in the last second\n",
turn_counter,
event_counter,
arrival_counter,
departure_counter,
occupancy);
turn_counter = 0;
event_counter = 0;
arrival_counter = 0;
departure_counter = 0;
},
frame = frames.next().boxed().fuse() => match frame {
None => return Ok(()),
Some(res) => match res? {
S2C::Err(msg, _) => return Err(msg.into()),
S2C::Turn(events) => {
turn_counter = turn_counter + 1;
event_counter = event_counter + events.len();
for e in events {
match e {
Event::Add(_, _) => {
arrival_counter = arrival_counter + 1;
occupancy = occupancy + 1;
},
Event::Del(_, _) => {
departure_counter = departure_counter + 1;
occupancy = occupancy - 1;
},
_ => ()
}
}
},
S2C::Ping() => frames.send(C2S::Pong()).await?,
S2C::Pong() => (),
}
},
}
}
}


@ -1,49 +0,0 @@
use futures::{SinkExt, StreamExt, poll};
use std::task::Poll;
use tokio::net::TcpStream;
use tokio_util::codec::Framed;
use syndicate::packets::{ClientCodec, C2S, S2C, Action, Event};
use syndicate::value::Value;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut frames = Framed::new(TcpStream::connect("127.0.0.1:8001").await?, ClientCodec::new());
frames.send(C2S::Connect(Value::from("chat").wrap())).await?;
let present_action = Action::Assert(
Value::from(0).wrap(),
Value::simple_record1("Present", Value::from(std::process::id()).wrap()).wrap());
let absent_action = Action::Clear(
Value::from(0).wrap());
frames.send(C2S::Turn(vec![present_action.clone()])).await?;
loop {
frames.send(C2S::Turn(vec![absent_action.clone()])).await?;
frames.send(C2S::Turn(vec![present_action.clone()])).await?;
loop {
match poll!(frames.next()) {
Poll::Pending => break,
Poll::Ready(None) => {
print!("Server closed connection");
return Ok(());
}
Poll::Ready(Some(res)) => {
match res? {
S2C::Turn(events) => {
for e in events {
match e {
Event::End(_) => (),
_ => println!("{:?}", e),
}
}
}
S2C::Ping() => frames.send(C2S::Pong()).await?,
p => println!("{:?}", p),
}
}
}
}
}
}

fixtags.sh Executable file

@ -0,0 +1,12 @@
#!/bin/sh
buildtag() {
name=$(grep '^name' "$1" | head -1 | sed -e 's:^.*"\([^"]*\)":\1:')
version=$(grep '^version' "$1" | head -1 | sed -e 's:^.*"\([^"]*\)":\1:')
echo "$name-v$version"
}
git tag "$(buildtag syndicate/Cargo.toml)"
git tag "$(buildtag syndicate-macros/Cargo.toml)"
git tag "$(buildtag syndicate-server/Cargo.toml)"
git tag "$(buildtag syndicate-tools/Cargo.toml)"
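The grep/head/sed pipeline in `buildtag` can be mirrored, for illustration, in the repository's own language. The `buildtag` function below is a hypothetical Rust equivalent, deliberately keeping the same naive first-match parsing rather than using a real TOML parser:

```rust
// Hypothetical Rust equivalent of the buildtag() shell function above.
// Deliberately naive first-match parsing, mirroring grep | head -1 | sed;
// a real tool would use a TOML parser instead.
fn buildtag(cargo_toml: &str) -> Option<String> {
    let field = |key: &str| {
        cargo_toml
            .lines()
            .find(|line| line.starts_with(key))      // grep '^key' | head -1
            .and_then(|line| line.split('"').nth(1)) // text between the first pair of quotes
            .map(str::to_owned)
    };
    Some(format!("{}-v{}", field("name")?, field("version")?))
}

fn main() {
    let toml = "name = \"syndicate\"\nversion = \"0.40.1\"\n";
    println!("{}", buildtag(toml).unwrap()); // syndicate-v0.40.1
}
```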

gatekeeper-config.pr Normal file

@ -0,0 +1,152 @@
# We will create a TCP listener on port 9222, which speaks unencrypted
# protocol and allows interaction with the default/system gatekeeper, which
# has a single noise binding for introducing encrypted interaction with a
# *second* gatekeeper, which finally allows resolution of references to
# other objects.
# First, build a space where we place bindings for the inner gatekeeper to
# expose.
let ?inner-bindings = dataspace
# Next, start the inner gatekeeper.
<require-service <gatekeeper $inner-bindings>>
? <service-object <gatekeeper $inner-bindings> ?inner-gatekeeper> [
# Expose it via a noise binding at the outer/system gatekeeper.
<bind <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
secretKey: #[qLkyuJw/K4yobr4XVKExbinDwEx9QTt9PfDWyx14/kg],
service: world }>
$inner-gatekeeper #f>
]
# Now, expose the outer gatekeeper to the world, via TCP. The system
# gatekeeper is a primordial syndicate-server object bound to $gatekeeper.
<require-service <relay-listener <tcp "0.0.0.0" 9222> $gatekeeper>>
# Finally, let's expose some behaviour accessible via the inner gatekeeper.
#
# We will create a service dataspace called $world.
let ?world = dataspace
# Running `syndicate-macaroon mint --oid a-service --phrase hello` yields:
#
# <ref {oid: a-service, sig: #[JTTGQeYCgohMXW/2S2XH8g]}>
#
# That's a root capability for the service. We use the corresponding
# sturdy.SturdyDescriptionDetail to bind it to $world.
#
$inner-bindings += <bind <ref {oid: a-service, key: #"hello"}>
$world #f>
# Now, we can hand out paths to our services involving an initial noise
# step and a subsequent sturdyref/macaroon step.
#
# For example, running `syndicate-macaroon` like this:
#
# syndicate-macaroon mint --oid a-service --phrase hello \
# --caveat '<rewrite <bind <_>> <rec labelled [<lit "alice"> <ref 0>]>>'
#
# generates
#
# <ref {caveats: [<rewrite <bind <_>> <rec labelled [<lit "alice">, <ref 0>]>>],
# oid: a-service,
# sig: #[CXn7+rAoO3Xr6Y6Laap3OA]}>
#
# which is an attenuation of the root capability we bound that wraps all
# assertions and messages in a `<labelled "alice" _>` wrapper.
#
# All together, the `gatekeeper.Route` that Alice would use would be
# something like:
#
# <route [<ws "wss://generic-dataspace.demo.leastfixedpoint.com/">]
# <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
# service: world }>
# <ref { caveats: [<rewrite <bind <_>> <rec labelled [<lit "alice">, <ref 0>]>>],
# oid: a-service,
# sig: #[CXn7+rAoO3Xr6Y6Laap3OA] }>>
#
# Here's one for "bob":
#
# syndicate-macaroon mint --oid a-service --phrase hello \
# --caveat '<rewrite <bind <_>> <rec labelled [<lit "bob"> <ref 0>]>>'
#
# <ref {caveats: [<rewrite <bind <_>> <rec labelled [<lit "bob">, <ref 0>]>>],
# oid: a-service,
# sig: #[/75BbF77LOiqNcvpzNHf0g]}>
#
# <route [<ws "wss://generic-dataspace.demo.leastfixedpoint.com/">]
# <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
# service: world }>
# <ref { caveats: [<rewrite <bind <_>> <rec labelled [<lit "bob">, <ref 0>]>>],
# oid: a-service,
# sig: #[/75BbF77LOiqNcvpzNHf0g] }>>
#
# We relay labelled to unlabelled information, enacting a chat protocol
# that enforces usernames.
$world [
# Assertions of presence have the username wiped out and replaced with the label.
? <labelled ?who <Present _>> <Present $who>
# Likewise utterance messages.
?? <labelled ?who <Says _ ?what>> ! <Says $who $what>
# We allow anyone to subscribe to presence and utterances.
? <labelled _ <Observe <rec Present ?p> ?o>> <Observe <rec Present $p> $o>
? <labelled _ <Observe <rec Says ?p> ?o>> <Observe <rec Says $p> $o>
]
# We can also use sturdyref rewrites to directly handle `Says` and
# `Present` values, rather than wrapping with `<labelled ...>` and
# unwrapping using the script fragment just above.
#
# The multiply-quoted patterns in the `Observe` cases start to get unwieldy
# at this point!
#
# For Alice:
#
# syndicate-macaroon mint --oid a-service --phrase hello --caveat '<or [
# <rewrite <rec Present [<_>]> <rec Present [<lit "alice">]>>
# <rewrite <rec Says [<_> <bind String>]> <rec Says [<lit "alice"> <ref 0>]>>
# <rewrite <bind <rec Observe [<rec rec [<lit Present> <_>]> <_>]>> <ref 0>>
# <rewrite <bind <rec Observe [<rec rec [<lit Says> <_>]> <_>]>> <ref 0>>
# ]>'
#
# <ref { oid: a-service sig: #[s918Jk6As8AWJ9rtozOTlg] caveats: [<or [
# <rewrite <rec Present [<_>]> <rec Present [<lit "alice">]>>
# <rewrite <rec Says [<_>, <bind String>]> <rec Says [<lit "alice">, <ref 0>]>>
# <rewrite <bind <rec Observe [<rec rec [<lit Present>, <_>]>, <_>]>> <ref 0>>
# <rewrite <bind <rec Observe [<rec rec [<lit Says>, <_>]>, <_>]>> <ref 0>> ]>]}>
#
# <route [<ws "wss://generic-dataspace.demo.leastfixedpoint.com/">]
# <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
# service: world }>
# <ref { oid: a-service sig: #[s918Jk6As8AWJ9rtozOTlg] caveats: [<or [
# <rewrite <rec Present [<_>]> <rec Present [<lit "alice">]>>
# <rewrite <rec Says [<_>, <bind String>]> <rec Says [<lit "alice">, <ref 0>]>>
# <rewrite <bind <rec Observe [<rec rec [<lit Present>, <_>]>, <_>]>> <ref 0>>
# <rewrite <bind <rec Observe [<rec rec [<lit Says>, <_>]>, <_>]>> <ref 0>> ]>]}>>
#
# For Bob:
#
# syndicate-macaroon mint --oid a-service --phrase hello --caveat '<or [
# <rewrite <rec Present [<_>]> <rec Present [<lit "bob">]>>
# <rewrite <rec Says [<_> <bind String>]> <rec Says [<lit "bob"> <ref 0>]>>
# <rewrite <bind <rec Observe [<rec rec [<lit Present> <_>]> <_>]>> <ref 0>>
# <rewrite <bind <rec Observe [<rec rec [<lit Says> <_>]> <_>]>> <ref 0>>
# ]>'
#
# <ref { oid: a-service sig: #[QBbV4LrS0i3BG6OyCPJl+A] caveats: [<or [
# <rewrite <rec Present [<_>]> <rec Present [<lit "bob">]>>
# <rewrite <rec Says [<_>, <bind String>]> <rec Says [<lit "bob">, <ref 0>]>>
# <rewrite <bind <rec Observe [<rec rec [<lit Present>, <_>]>, <_>]>> <ref 0>>
# <rewrite <bind <rec Observe [<rec rec [<lit Says>, <_>]>, <_>]>> <ref 0>> ]>]}>
#
# <route [<ws "wss://generic-dataspace.demo.leastfixedpoint.com/">]
# <noise { key: #[z1w/OLy0wi3Veyk8/D+2182YxcrKpgc8y0ZJEBDrmWs],
# service: world }>
# <ref { oid: a-service sig: #[QBbV4LrS0i3BG6OyCPJl+A] caveats: [<or [
# <rewrite <rec Present [<_>]> <rec Present [<lit "bob">]>>
# <rewrite <rec Says [<_>, <bind String>]> <rec Says [<lit "bob">, <ref 0>]>>
# <rewrite <bind <rec Observe [<rec rec [<lit Present>, <_>]>, <_>]>> <ref 0>>
# <rewrite <bind <rec Observe [<rec rec [<lit Says>, <_>]>, <_>]>> <ref 0>> ]>]}>>
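The effect of the `<labelled ...>` caveats described in the comments above can be sketched as a plain function over a toy value type. This is illustrative only; `Value` and `attenuate` are stand-ins, not the real preserves/sturdyref rewrite machinery:

```rust
// Toy model of the attenuation above: every assertion crossing the
// attenuated capability is wrapped as <labelled "alice" original>.
// `Value` is a hypothetical stand-in, not the real preserves data type.
#[derive(Debug, Clone, PartialEq)]
enum Value {
    String(String),
    Record(String, Vec<Value>), // <label field...>
}

// Models <rewrite <bind <_>> <rec labelled [<lit "alice"> <ref 0>]>>
fn attenuate(label: &str, assertion: Value) -> Value {
    Value::Record(
        "labelled".to_string(),
        vec![Value::String(label.to_string()), assertion],
    )
}

fn main() {
    let says = Value::Record(
        "Says".to_string(),
        vec![Value::String("alice".to_string()), Value::String("hi".to_string())],
    );
    // Whatever name a client claims for itself, the caveat the gatekeeper
    // applies forces the label the macaroon was minted with.
    println!("{:?}", attenuate("alice", says));
}
```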

http-config.pr Normal file

@ -0,0 +1,65 @@
# We use $root_ds as the httpd space.
let ?root_ds = dataspace
# Supplying $root_ds as the last parameter in this relay-listener enables httpd service.
<require-service <relay-listener <tcp "0.0.0.0" 9001> $gatekeeper $root_ds>>
# Regular gatekeeper stuff works too.
<bind <ref { oid: "syndicate" key: #x"" }> $root_ds #f>
# Create an httpd router monitoring $root_ds for incoming requests and http-bind records.
<require-service <http-router $root_ds>>
# Create a static file server. When it gets a request, it ignores the first n (here, 1)
# elements of the path, and takes the remainder as relative to its configured directory (here,
# ".").
#
<require-service <http-static-files "." 1>>
#
# It publishes a service object: requests should be asserted to this.
# The http-bind record establishes this mapping.
#
? <service-object <http-static-files "." 1> ?handler> [
$root_ds += <http-bind #f 9001 get ["files" ...] $handler>
]
# Separately, bind path /d to $index, and respond there.
#
let ?index = dataspace
$root_ds += <http-bind #f 9001 get ["d"] $index>
$index ? <request _ ?k> [
$k ! <status 200 "OK">
$k ! <header content-type "text/html">
$k ! <chunk "<!DOCTYPE html>">
$k ! <done "<html><body>D</body></html>">
]
# Similarly, bind three paths, /d, /e and /t, to $index2.
# Because /d doubles up, the httpd router gives a warning when it is accessed.
# Accessing /e works fine.
# Accessing /t results in wasted work because of the hijacking listeners below.
#
let ?index2 = dataspace
$root_ds += <http-bind #f 9001 get ["d"] $index2>
$root_ds += <http-bind #f 9001 get ["e"] $index2>
$root_ds += <http-bind #f 9001 get ["t"] $index2>
$index2 ? <request _ ?k> [
$k ! <status 200 "OK">
$k ! <header content-type "text/html">
$k ! <chunk "<!DOCTYPE html>">
$k ! <done "<html><body>D2</body></html>">
]
# These two hijack /t by listening for raw incoming requests the same way the httpd router
# does. They respond quicker and so win the race. The httpd router's responses are lost.
#
$root_ds ? <request <http-request _ _ _ get ["t"] _ _ _> ?k> [
$k ! <status 200 "OK">
$k ! <header content-type "text/html">
$k ! <done "<html><body>T</body></html>">
]
$root_ds ? <request <http-request _ _ _ get ["t"] _ _ _> ?k> [
$k ! <status 200 "OK">
$k ! <header content-type "text/html">
$k ! <done "<html><body>T2</body></html>">
]
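The status/header/chunk/done message sequence the handlers above send to `$k` can be modelled with a plain enum to show what ends up on the wire. The enum and `render` below are illustrative stand-ins, not the syndicate-server HTTP API (the real server assembles the response itself):

```rust
// Toy model of the response messages a handler sends to $k above.
enum ResponseEvent {
    Status(u16, &'static str),
    Header(&'static str, &'static str),
    Chunk(&'static str),
    Done(&'static str),
}

fn render(events: &[ResponseEvent]) -> String {
    use ResponseEvent::*;
    let mut out = String::new();
    let mut in_body = false;
    for e in events {
        match e {
            Status(code, text) => out.push_str(&format!("HTTP/1.1 {} {}\r\n", code, text)),
            Header(name, value) => out.push_str(&format!("{}: {}\r\n", name, value)),
            Chunk(s) | Done(s) => {
                if !in_body {
                    out.push_str("\r\n"); // blank line separates headers from body
                    in_body = true;
                }
                out.push_str(s);
            }
        }
    }
    out
}

fn main() {
    use ResponseEvent::*;
    let events = [
        Status(200, "OK"),
        Header("content-type", "text/html"),
        Chunk("<!DOCTYPE html>"),
        Done("<html><body>D</body></html>"),
    ];
    print!("{}", render(&events));
}
```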

rustup-and-install.sh Executable file

@ -0,0 +1,4 @@
#!/bin/sh
set -e
rustup update
cargo +nightly install --path `pwd`/syndicate-server


@ -1,204 +0,0 @@
use syndicate::{config, spaces, packets, ConnId};
use syndicate::peer::Peer;
use std::sync::{Mutex, Arc};
use futures::{SinkExt, StreamExt};
use tracing::{Level, error, info, trace};
use tracing_futures::Instrument;
use tokio::net::TcpListener;
use tokio::net::TcpStream;
use tokio_util::codec::Framed;
use tungstenite::Message;
use structopt::StructOpt; // for from_args in main
type UnitAsyncResult = Result<(), std::io::Error>;
fn message_error<E: std::fmt::Display>(e: E) -> packets::Error {
packets::Error::Message(e.to_string())
}
fn encode_message(p: packets::S2C) ->
Result<Message, packets::Error>
{
let mut bs = Vec::with_capacity(128);
preserves::ser::to_writer(&mut preserves::value::PackedWriter::new(&mut bs), &p)?;
Ok(Message::Binary(bs))
}
fn message_encoder(p: packets::S2C) -> futures::future::Ready<Result<Message, packets::Error>>
{
futures::future::ready(encode_message(p))
}
async fn message_decoder(r: Result<Message, tungstenite::Error>) -> Option<Result<packets::C2S, packets::Error>>
{
match r {
Ok(ref m) => match m {
Message::Text(_) =>
Some(Err(preserves::error::syntax_error("Text websocket frames are not accepted"))),
Message::Binary(ref bs) =>
match preserves::de::from_bytes(bs) {
Ok(p) => Some(Ok(p)),
Err(e) => Some(Err(e.into())),
},
Message::Ping(_) =>
None, // pings are handled by tungstenite before we see them
Message::Pong(_) =>
None, // unsolicited pongs are to be ignored
Message::Close(_) =>
Some(Err(preserves::error::eof())),
}
Err(tungstenite::Error::Io(e)) =>
Some(Err(e.into())),
Err(e) =>
Some(Err(message_error(e))),
}
}
async fn run_connection(connid: ConnId,
mut stream: TcpStream,
spaces: Arc<Mutex<spaces::Spaces>>,
addr: std::net::SocketAddr,
config: config::ServerConfigRef) ->
UnitAsyncResult
{
let mut buf = [0; 1]; // peek at the first byte to see what kind of connection to expect
match stream.peek(&mut buf).await? {
1 => match buf[0] {
71 /* ASCII 'G' for "GET" */ => {
info!(protocol = display("websocket"), peer = debug(addr));
let s = tokio_tungstenite::accept_async(stream).await
.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
let (o, i) = s.split();
let i = i.filter_map(message_decoder);
let o = o.sink_map_err(message_error).with(message_encoder);
let mut p = Peer::new(connid, i, o);
p.run(spaces, &config).await?
},
_ => {
info!(protocol = display("raw"), peer = debug(addr));
let (o, i) = Framed::new(stream, packets::Codec::new()).split();
let mut p = Peer::new(connid, i, o);
p.run(spaces, &config).await?
}
}
0 => return Err(std::io::Error::new(std::io::ErrorKind::UnexpectedEof,
"closed before starting")),
_ => unreachable!()
}
Ok(())
}
static NEXT_ID: std::sync::atomic::AtomicU64 = std::sync::atomic::AtomicU64::new(1);
async fn run_listener(spaces: Arc<Mutex<spaces::Spaces>>, port: u16, config: config::ServerConfigRef) ->
UnitAsyncResult
{
let mut listener = TcpListener::bind(format!("0.0.0.0:{}", port)).await?;
loop {
let (stream, addr) = listener.accept().await?;
let id = NEXT_ID.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let spaces = Arc::clone(&spaces);
let config = Arc::clone(&config);
if let Some(n) = config.recv_buffer_size { stream.set_recv_buffer_size(n)?; }
if let Some(n) = config.send_buffer_size { stream.set_send_buffer_size(n)?; }
tokio::spawn(async move {
match run_connection(id, stream, spaces, addr, config).await {
Ok(()) => info!("closed"),
Err(e) => info!(error = display(e), "closed"),
}
}.instrument(tracing::info_span!("connection", id)));
}
}
async fn periodic_tasks(spaces: Arc<Mutex<spaces::Spaces>>) -> UnitAsyncResult {
let interval = core::time::Duration::from_secs(10);
let mut delay = tokio::time::interval(interval);
loop {
delay.next().await.unwrap();
{
let mut spaces = spaces.lock().unwrap();
spaces.cleanup();
spaces.dump_stats(interval);
}
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let filter = tracing_subscriber::filter::EnvFilter::from_default_env()
.add_directive(tracing_subscriber::filter::LevelFilter::INFO.into());
let subscriber = tracing_subscriber::FmtSubscriber::builder()
.with_ansi(true)
.with_max_level(Level::TRACE)
.with_env_filter(filter)
.finish();
tracing::subscriber::set_global_default(subscriber)
.expect("Could not set tracing global subscriber");
{
const BRIGHT_GREEN: &str = "\x1b[92m";
const RED: &str = "\x1b[31m";
const GREEN: &str = "\x1b[32m";
const NORMAL: &str = "\x1b[0m";
const BRIGHT_YELLOW: &str = "\x1b[93m";
info!(r"{} ______ {}", GREEN, NORMAL);
info!(r"{} / {}\_{}\{} ", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
info!(r"{} / {},{}__/{} \ {} ____ __", GREEN, RED, BRIGHT_GREEN, GREEN, NORMAL);
info!(r"{} /{}\__/ \{},{} \{} _______ ______ ____/ /_/________ / /____", GREEN, BRIGHT_GREEN, RED, GREEN, NORMAL);
info!(r"{} \{}/ \__/ {}/{} / ___/ / / / __ \/ __ / / ___/ __ \/ __/ _ \", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
info!(r"{} \ {}'{} \__{}/ {} _\_ \/ /_/ / / / / /_/ / / /__/ /_/ / /_/ __/", GREEN, RED, BRIGHT_GREEN, GREEN, NORMAL);
info!(r"{} \____{}/{}_/ {} /____/\__, /_/ /_/\____/_/\___/\__/_/\__/\___/", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
info!(r" /____/");
// info!(r" {} __{}__{}__ {}", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// info!(r" {} /{}_/ \_{}\ {}", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// info!(r" {} / \__/ \ {} __ __", BRIGHT_GREEN, NORMAL);
// info!(r" {}/{}\__/ \__/{}\{} _______ ______ ____/ /__________ / /____", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// info!(r" {}\{}/ \__/ \{}/{} / ___/ / / / __ \/ __ / / ___/ __ \/ __/ _ \", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// info!(r" {} \__/ \__/ {} _\_ \/ /_/ / / / / /_/ / / /__/ /_/ / /_/ __/", BRIGHT_GREEN, NORMAL);
// info!(r" {} \_{}\__/{}_/ {} /____/\__, /_/ /_/\____/_/\___/\__/_/\__/\___/", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// info!(r" /____/");
info!(r"");
info!(r" {}version {}{}", BRIGHT_YELLOW, env!("CARGO_PKG_VERSION"), NORMAL);
info!(r"");
info!(r" documentation & reference material: https://syndicate-lang.org/");
info!(r" source code & bugs: https://git.syndicate-lang.org/syndicate-lang/syndicate-rs");
info!(r"");
}
let config = Arc::new(config::ServerConfig::from_args());
let spaces = Arc::new(Mutex::new(spaces::Spaces::new()));
let mut daemons = Vec::new();
{
let spaces = Arc::clone(&spaces);
tokio::spawn(async move {
periodic_tasks(spaces).await
});
}
trace!("startup");
for port in config.ports.clone() {
let spaces = Arc::clone(&spaces);
let config = Arc::clone(&config);
daemons.push(tokio::spawn(async move {
info!(port, "listening");
match run_listener(spaces, port, config).await {
Ok(()) => (),
Err(e) => error!("{}", e),
}
}.instrument(tracing::info_span!("listener", port))));
}
futures::future::join_all(daemons).await;
Ok(())
}


@ -1,19 +0,0 @@
use structopt::StructOpt;
#[derive(Clone, StructOpt)]
pub struct ServerConfig {
#[structopt(short = "p", long = "port", default_value = "8001")]
pub ports: Vec<u16>,
#[structopt(long)]
pub recv_buffer_size: Option<usize>,
#[structopt(long)]
pub send_buffer_size: Option<usize>,
#[structopt(long, default_value = "10000")]
pub overload_threshold: usize,
#[structopt(long, default_value = "5")]
pub overload_turn_limit: usize,
}
pub type ServerConfigRef = std::sync::Arc<ServerConfig>;


@ -1,209 +0,0 @@
use super::V;
use super::ConnId;
use super::packets::{self, Assertion, EndpointName};
use super::skeleton;
use preserves::value::{self, Map, NestedValue};
use std::sync::{Arc, RwLock, atomic::{AtomicUsize, Ordering}};
use tokio::sync::mpsc::UnboundedSender;
pub type DataspaceRef = Arc<RwLock<Dataspace>>;
pub type DataspaceError = (String, V);
#[derive(Debug)]
struct Actor {
tx: UnboundedSender<packets::S2C>,
queue_depth: Arc<AtomicUsize>,
endpoints: Map<EndpointName, ActorEndpoint>,
}
#[derive(Debug)]
struct ActorEndpoint {
analysis_results: Option<skeleton::AnalysisResults>,
assertion: Assertion,
}
#[derive(Debug)]
pub struct Churn {
pub peers_added: usize,
pub peers_removed: usize,
pub assertions_added: usize,
pub assertions_removed: usize,
pub endpoints_added: usize,
pub endpoints_removed: usize,
pub messages_injected: usize,
pub messages_delivered: usize,
}
impl Churn {
pub fn new() -> Self {
Self {
peers_added: 0,
peers_removed: 0,
assertions_added: 0,
assertions_removed: 0,
endpoints_added: 0,
endpoints_removed: 0,
messages_injected: 0,
messages_delivered: 0,
}
}
pub fn reset(&mut self) {
self.peers_added = 0;
self.peers_removed = 0;
self.assertions_added = 0;
self.assertions_removed = 0;
self.endpoints_added = 0;
self.endpoints_removed = 0;
self.messages_injected = 0;
self.messages_delivered = 0;
}
}
#[derive(Debug)]
pub struct Dataspace {
name: V,
peers: Map<ConnId, Actor>,
index: skeleton::Index,
pub churn: Churn,
}
impl Dataspace {
pub fn new(name: &V) -> Self {
Self {
name: name.clone(),
peers: Map::new(),
index: skeleton::Index::new(),
churn: Churn::new(),
}
}
pub fn new_ref(name: &V) -> DataspaceRef {
Arc::new(RwLock::new(Self::new(name)))
}
pub fn register(&mut self, id: ConnId,
tx: UnboundedSender<packets::S2C>,
queue_depth: Arc<AtomicUsize>)
{
assert!(!self.peers.contains_key(&id));
self.peers.insert(id, Actor {
tx,
queue_depth,
endpoints: Map::new(),
});
self.churn.peers_added += 1;
}
pub fn deregister(&mut self, id: ConnId) {
let ac = self.peers.remove(&id).unwrap();
self.churn.peers_removed += 1;
let mut outbound_turns: Map<ConnId, Vec<packets::Event>> = Map::new();
for (epname, ep) in ac.endpoints {
self.remove_endpoint(&mut outbound_turns, id, &epname, ep);
}
outbound_turns.remove(&id);
self.deliver_outbound_turns(outbound_turns);
}
fn remove_endpoint(&mut self,
mut outbound_turns: &mut Map<ConnId, Vec<packets::Event>>,
id: ConnId,
epname: &EndpointName,
ep: ActorEndpoint)
{
let ActorEndpoint{ analysis_results, assertion } = ep;
if let Some(ar) = analysis_results {
self.index.remove_endpoint(&ar, skeleton::Endpoint {
connection: id,
name: epname.clone(),
});
}
let old_assertions = self.index.assertion_count();
self.index.remove((&assertion).into(), &mut outbound_turns);
self.churn.assertions_removed += old_assertions - self.index.assertion_count();
self.churn.endpoints_removed += 1;
}
pub fn turn(&mut self, id: ConnId, actions: Vec<packets::Action>) ->
Result<(), DataspaceError>
{
let mut outbound_turns: Map<ConnId, Vec<packets::Event>> = Map::new();
for a in actions {
tracing::trace!(action = debug(&a), "turn");
match a {
packets::Action::Assert(ref epname, ref assertion) => {
let ac = self.peers.get_mut(&id).unwrap();
if ac.endpoints.contains_key(&epname) {
return Err(("Duplicate endpoint name".to_string(), value::to_value(a)));
}
let ar =
if let Some(fs) = assertion.value().as_simple_record("observe", Some(1)) {
let ar = skeleton::analyze(&fs[0]);
let events = self.index.add_endpoint(&ar, skeleton::Endpoint {
connection: id,
name: epname.clone(),
});
outbound_turns.entry(id).or_insert_with(Vec::new).extend(events);
Some(ar)
} else {
None
};
let old_assertions = self.index.assertion_count();
self.index.insert(assertion.into(), &mut outbound_turns);
self.churn.assertions_added += self.index.assertion_count() - old_assertions;
self.churn.endpoints_added += 1;
ac.endpoints.insert(epname.clone(), ActorEndpoint {
analysis_results: ar,
assertion: assertion.clone()
});
}
packets::Action::Clear(ref epname) => {
let ac = self.peers.get_mut(&id).unwrap();
match ac.endpoints.remove(epname) {
None => {
return Err(("Nonexistent endpoint name".to_string(), value::to_value(a)));
}
Some(ep) => {
self.remove_endpoint(&mut outbound_turns, id, epname, ep);
outbound_turns.entry(id).or_insert_with(Vec::new)
.push(packets::Event::End(epname.clone()));
}
}
}
packets::Action::Message(ref assertion) => {
self.index.send(assertion.into(),
&mut outbound_turns,
&mut self.churn.messages_delivered);
self.churn.messages_injected += 1;
}
}
}
self.deliver_outbound_turns(outbound_turns);
Ok(())
}
fn deliver_outbound_turns(&mut self, outbound_turns: Map<ConnId, Vec<packets::Event>>) {
for (target, events) in outbound_turns {
let actor = self.peers.get_mut(&target).unwrap();
let _ = actor.tx.send(packets::S2C::Turn(events));
actor.queue_depth.fetch_add(1, Ordering::Relaxed);
}
}
pub fn peer_count(&self) -> usize {
self.peers.len()
}
pub fn assertion_count(&self) -> usize {
self.index.assertion_count()
}
pub fn endpoint_count(&self) -> isize {
self.index.endpoint_count()
}
}


@ -1,30 +0,0 @@
#![recursion_limit="512"]
pub mod bag;
pub mod config;
pub mod dataspace;
pub mod packets;
pub mod peer;
pub mod skeleton;
pub mod spaces;
pub use preserves::value;
// use std::sync::atomic::{AtomicUsize, Ordering};
//
// #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
// pub enum Syndicate {
// Placeholder(usize),
// }
//
// impl value::Domain for Syndicate {}
//
// static NEXT_PLACEHOLDER: AtomicUsize = AtomicUsize::new(0);
// impl Syndicate {
// pub fn new_placeholder() -> Self {
// Self::Placeholder(NEXT_PLACEHOLDER.fetch_add(1, Ordering::SeqCst))
// }
// }
pub type ConnId = u64;
pub type V = value::IOValue; // value::ArcValue<Syndicate>;


@ -1,91 +0,0 @@
use super::V;
use bytes::{Buf, buf::BufMutExt, BytesMut};
use std::sync::Arc;
use std::marker::PhantomData;
use preserves::{
de::Deserializer,
error,
ser::to_writer,
value::{PackedReader, PackedWriter},
};
pub type EndpointName = V;
pub type Assertion = V;
pub type Captures = Arc<Vec<Assertion>>;
#[derive(Clone, Debug, serde::Serialize, serde::Deserialize)]
pub enum Action {
Assert(EndpointName, Assertion),
Clear(EndpointName),
Message(Assertion),
}
#[derive(Clone, Debug, serde::Serialize, serde::Deserialize)]
pub enum Event {
Add(EndpointName, Captures),
Del(EndpointName, Captures),
Msg(EndpointName, Captures),
End(EndpointName),
}
#[derive(Clone, Debug, serde::Serialize, serde::Deserialize)]
pub enum C2S {
Connect(V),
Turn(Vec<Action>),
Ping(),
Pong(),
}
#[derive(Clone, Debug, serde::Serialize, serde::Deserialize)]
pub enum S2C {
Err(String, V),
Turn(Vec<Event>),
Ping(),
Pong(),
}
//---------------------------------------------------------------------------
pub type Error = error::Error;
pub struct Codec<InT, OutT> {
ph_in: PhantomData<InT>,
ph_out: PhantomData<OutT>,
}
pub type ServerCodec = Codec<C2S, S2C>;
pub type ClientCodec = Codec<S2C, C2S>;
impl<InT, OutT> Codec<InT, OutT> {
pub fn new() -> Self {
Codec { ph_in: PhantomData, ph_out: PhantomData }
}
}
impl<InT: serde::de::DeserializeOwned, OutT> tokio_util::codec::Decoder for Codec<InT, OutT> {
type Item = InT;
type Error = Error;
fn decode(&mut self, bs: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
let mut r = PackedReader::decode_bytes(bs);
let mut d = Deserializer::from_reader(&mut r);
match Self::Item::deserialize(&mut d) {
Err(e) if error::is_eof_error(&e) => Ok(None),
Err(e) => Err(e),
Ok(item) => {
let count = d.read.source.index;
bs.advance(count);
Ok(Some(item))
}
}
}
}
impl<InT, OutT: serde::Serialize> tokio_util::codec::Encoder<OutT> for Codec<InT, OutT>
{
type Error = Error;
fn encode(&mut self, item: OutT, bs: &mut BytesMut) -> Result<(), Self::Error> {
to_writer(&mut PackedWriter::new(&mut bs.writer()), &item)
}
}
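The `Decoder::decode` above follows the standard tokio_util contract: returning `Ok(None)` means "incomplete input, wait for more bytes", while a successful decode must advance the buffer past exactly one item. A stdlib-only toy with a made-up one-byte length prefix (not the Preserves packed encoding) shows the shape of that contract:

```rust
// Toy sketch of the decoder contract: return None when the buffer holds
// only a partial packet, and consume exactly one complete item otherwise.
// The one-byte length prefix is an invented framing, for illustration.
fn decode(buf: &mut Vec<u8>) -> Option<Vec<u8>> {
    let len = *buf.first()? as usize;       // 1-byte length prefix
    if buf.len() < 1 + len {
        return None;                        // partial packet: wait for more bytes
    }
    let item = buf[1..1 + len].to_vec();
    buf.drain(..1 + len);                   // advance past the consumed bytes
    Some(item)
}

fn main() {
    let mut buf = vec![3, b'a', b'b', b'c', 2, b'x'];
    assert_eq!(decode(&mut buf), Some(b"abc".to_vec()));
    assert_eq!(decode(&mut buf), None); // "2, x" is an incomplete packet
    buf.push(b'y');
    assert_eq!(decode(&mut buf), Some(b"xy".to_vec()));
    println!("ok");
}
```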


@ -1,198 +0,0 @@
use super::V;
use super::ConnId;
use super::dataspace;
use super::packets;
use super::spaces;
use super::config;
use core::time::Duration;
use futures::{Sink, SinkExt, Stream};
use futures::FutureExt;
use futures::select;
use preserves::value;
use std::pin::Pin;
use std::sync::{Mutex, Arc, atomic::{AtomicUsize, Ordering}};
use tokio::stream::StreamExt;
use tokio::sync::mpsc::{unbounded_channel, UnboundedSender, UnboundedReceiver, error::TryRecvError};
use tokio::time::interval;
pub type ResultC2S = Result<packets::C2S, packets::Error>;
pub struct Peer<I, O>
where I: Stream<Item = ResultC2S> + Send,
O: Sink<packets::S2C, Error = packets::Error>,
{
id: ConnId,
tx: UnboundedSender<packets::S2C>,
rx: UnboundedReceiver<packets::S2C>,
i: Pin<Box<I>>,
o: Pin<Box<O>>,
space: Option<dataspace::DataspaceRef>,
}
fn err(s: &str, ctx: V) -> packets::S2C {
packets::S2C::Err(s.into(), ctx)
}
impl<I, O> Peer<I, O>
where I: Stream<Item = ResultC2S> + Send,
O: Sink<packets::S2C, Error = packets::Error>,
{
pub fn new(id: ConnId, i: I, o: O) -> Self {
let (tx, rx) = unbounded_channel();
Peer{ id, tx, rx, i: Box::pin(i), o: Box::pin(o), space: None }
}
pub async fn run(&mut self, spaces: Arc<Mutex<spaces::Spaces>>, config: &config::ServerConfig) ->
Result<(), packets::Error>
{
let firstpacket = self.i.next().await;
let dsname = if let Some(Ok(packets::C2S::Connect(dsname))) = firstpacket {
dsname
} else {
let e = format!("Expected initial Connect, got {:?}", firstpacket);
self.o.send(err(&e, value::FALSE.clone())).await?;
return Err(preserves::error::syntax_error(&e))
};
self.space = Some(spaces.lock().unwrap().lookup(&dsname));
let queue_depth = Arc::new(AtomicUsize::new(0));
self.space.as_ref().unwrap().write().unwrap().register(
self.id,
self.tx.clone(),
Arc::clone(&queue_depth));
let mut ping_timer = interval(Duration::from_secs(60));
let mut running = true;
let mut overloaded = None;
let mut previous_sample = None;
while running {
let mut to_send = Vec::new();
let queue_depth_sample = queue_depth.load(Ordering::Relaxed);
if queue_depth_sample > config.overload_threshold {
let n = overloaded.unwrap_or(0);
tracing::warn!(turns=n, queue_depth=queue_depth_sample, "overloaded");
if n == config.overload_turn_limit {
to_send.push(err("Overloaded",
value::Value::from(queue_depth_sample as u64).wrap()));
running = false;
} else {
if queue_depth_sample > previous_sample.unwrap_or(0) {
overloaded = Some(n + 1)
} else {
overloaded = Some(0)
}
}
} else {
if let Some(_) = overloaded {
tracing::info!(queue_depth=queue_depth_sample, "recovered");
}
overloaded = None;
}
previous_sample = Some(queue_depth_sample);
select! {
_instant = ping_timer.next().boxed().fuse() => to_send.push(packets::S2C::Ping()),
frame = self.i.next().fuse() => match frame {
Some(res) => match res {
Ok(p) => {
tracing::trace!(packet = debug(&p), "input");
match p {
packets::C2S::Turn(actions) => {
match self.space.as_ref().unwrap().write().unwrap()
.turn(self.id, actions)
{
Ok(()) => (),
Err((msg, ctx)) => {
to_send.push(err(&msg, ctx));
running = false;
}
}
}
packets::C2S::Ping() =>
to_send.push(packets::S2C::Pong()),
packets::C2S::Pong() =>
(),
packets::C2S::Connect(_) => {
to_send.push(err("Unexpected Connect", value::to_value(p)));
running = false;
}
}
}
Err(e) if preserves::error::is_eof_error(&e) => {
tracing::trace!("eof");
running = false;
}
Err(e) if preserves::error::is_syntax_error(&e) => {
to_send.push(err(&e.to_string(), value::FALSE.clone()));
running = false;
}
Err(e) => {
if preserves::error::is_io_error(&e) {
return Err(e);
} else {
to_send.push(err(&format!("Packet deserialization error: {}", e),
value::FALSE.clone()));
running = false;
}
}
}
None => {
tracing::trace!("remote has closed");
running = false;
}
},
msgopt = self.rx.recv().boxed().fuse() => {
let mut ok = true;
match msgopt {
Some(msg) => {
to_send.push(msg);
loop {
match self.rx.try_recv() {
Ok(m) => to_send.push(m),
Err(TryRecvError::Empty) => {
queue_depth.store(0, Ordering::Relaxed);
break;
}
Err(TryRecvError::Closed) => {
ok = false;
break;
}
}
}
}
None => ok = false,
}
if !ok {
/* weird. */
to_send.push(err("Outbound channel closed unexpectedly", value::FALSE.clone()));
running = false;
}
},
}
for v in to_send {
if let packets::S2C::Err(ref msg, ref ctx) = v {
tracing::error!(context = debug(ctx), msg = display(msg), "error");
} else {
tracing::trace!(packet = debug(&v), "output");
}
self.o.send(v).await?;
}
tokio::task::yield_now().await;
}
Ok(())
}
}
impl<I, O> Drop for Peer<I, O>
where I: Stream<Item = ResultC2S> + Send,
O: Sink<packets::S2C, Error = packets::Error>,
{
fn drop(&mut self) {
if let Some(ref s) = self.space {
s.write().unwrap().deregister(self.id);
}
}
}


@ -1,609 +0,0 @@
use super::ConnId;
use super::bag;
use super::packets::Assertion;
use super::packets::Captures;
use super::packets::EndpointName;
use super::packets::Event;
use preserves::value::{Map, Set, Value, NestedValue};
use std::cmp::Ordering;
use std::collections::btree_map::Entry;
use std::sync::Arc;
type Bag<A> = bag::BTreeBag<A>;
pub type Path = Vec<usize>;
pub type Paths = Vec<Path>;
pub type Events = Vec<Event>;
pub type TurnMap = Map<ConnId, Events>;
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct Endpoint {
pub connection: ConnId,
pub name: EndpointName,
}
#[derive(Debug)]
pub enum Skeleton {
Blank,
Guarded(Guard, Vec<Skeleton>)
}
#[derive(Debug)]
pub struct AnalysisResults {
pub skeleton: Skeleton,
pub const_paths: Paths,
pub const_vals: Captures,
pub capture_paths: Paths,
pub assertion: Assertion,
}
#[derive(Debug)]
pub struct Index {
all_assertions: Bag<CachedAssertion>,
root: Node,
}
impl Index {
pub fn new() -> Self {
Index{ all_assertions: Bag::new(), root: Node::new(Continuation::new(Set::new())) }
}
pub fn add_endpoint(&mut self, analysis_results: &AnalysisResults, endpoint: Endpoint) -> Events
{
let continuation = self.root.extend(&analysis_results.skeleton);
let continuation_cached_assertions = &continuation.cached_assertions;
let const_val_map =
continuation.leaf_map.entry(analysis_results.const_paths.clone()).or_insert_with(|| {
let mut cvm = Map::new();
for a in continuation_cached_assertions {
let key = project_paths(a.unscope(), &analysis_results.const_paths);
cvm.entry(key).or_insert_with(Leaf::new).cached_assertions.insert(a.clone());
}
cvm
});
let capture_paths = &analysis_results.capture_paths;
let leaf = const_val_map.entry(analysis_results.const_vals.clone()).or_insert_with(Leaf::new);
let leaf_cached_assertions = &leaf.cached_assertions;
let endpoints = leaf.endpoints_map.entry(capture_paths.clone()).or_insert_with(|| {
let mut b = Bag::new();
for a in leaf_cached_assertions {
let (restriction_paths, term) = a.unpack();
if is_unrestricted(&capture_paths, restriction_paths) {
let captures = project_paths(term, &capture_paths);
*b.entry(captures).or_insert(0) += 1;
}
}
Endpoints::new(b)
});
let endpoint_name = endpoint.name.clone();
endpoints.endpoints.insert(endpoint);
endpoints.cached_captures.into_iter()
.map(|(cs,_)| Event::Add(endpoint_name.clone(), cs.clone()))
.collect()
}
pub fn remove_endpoint(&mut self, analysis_results: &AnalysisResults, endpoint: Endpoint) {
let continuation = self.root.extend(&analysis_results.skeleton);
if let Entry::Occupied(mut const_val_map_entry)
= continuation.leaf_map.entry(analysis_results.const_paths.clone())
{
let const_val_map = const_val_map_entry.get_mut();
if let Entry::Occupied(mut leaf_entry)
= const_val_map.entry(analysis_results.const_vals.clone())
{
let leaf = leaf_entry.get_mut();
if let Entry::Occupied(mut endpoints_entry)
= leaf.endpoints_map.entry(analysis_results.capture_paths.clone())
{
let endpoints = endpoints_entry.get_mut();
endpoints.endpoints.remove(&endpoint);
if endpoints.endpoints.is_empty() {
endpoints_entry.remove_entry();
}
}
if leaf.is_empty() {
leaf_entry.remove_entry();
}
}
if const_val_map.is_empty() {
const_val_map_entry.remove_entry();
}
}
}
pub fn insert(&mut self, outer_value: CachedAssertion, outputs: &mut TurnMap) {
let net = self.all_assertions.change(outer_value.clone(), 1);
match net {
bag::Net::AbsentToPresent => {
Modification::new(
true,
&outer_value,
|c, v| { c.cached_assertions.insert(v.clone()); },
|l, v| { l.cached_assertions.insert(v.clone()); },
|es, cs| {
if es.cached_captures.change(cs.clone(), 1) == bag::Net::AbsentToPresent {
for ep in &es.endpoints {
outputs.entry(ep.connection).or_insert_with(Vec::new)
.push(Event::Add(ep.name.clone(), cs.clone()))
}
}
})
.perform(&mut self.root);
}
bag::Net::PresentToPresent => (),
_ => unreachable!(),
}
}
pub fn remove(&mut self, outer_value: CachedAssertion, outputs: &mut TurnMap) {
let net = self.all_assertions.change(outer_value.clone(), -1);
match net {
bag::Net::PresentToAbsent => {
Modification::new(
false,
&outer_value,
|c, v| { c.cached_assertions.remove(v); },
|l, v| { l.cached_assertions.remove(v); },
|es, cs| {
if es.cached_captures.change(cs.clone(), -1) == bag::Net::PresentToAbsent {
for ep in &es.endpoints {
outputs.entry(ep.connection).or_insert_with(Vec::new)
.push(Event::Del(ep.name.clone(), cs.clone()))
}
}
})
.perform(&mut self.root);
}
bag::Net::PresentToPresent => (),
_ => unreachable!(),
}
}
pub fn send(&mut self,
outer_value: CachedAssertion,
outputs: &mut TurnMap,
delivery_count: &mut usize)
{
Modification::new(
false,
&outer_value,
|_c, _v| (),
|_l, _v| (),
|es, cs| {
*delivery_count += es.endpoints.len();
for ep in &es.endpoints {
outputs.entry(ep.connection).or_insert_with(Vec::new)
.push(Event::Msg(ep.name.clone(), cs.clone()))
}
}).perform(&mut self.root);
}
pub fn assertion_count(&self) -> usize {
self.all_assertions.len()
}
pub fn endpoint_count(&self) -> isize {
self.all_assertions.total()
}
}
#[derive(Debug)]
struct Node {
continuation: Continuation,
edges: Map<Selector, Map<Guard, Node>>,
}
impl Node {
fn new(continuation: Continuation) -> Self {
Node { continuation, edges: Map::new() }
}
fn extend(&mut self, skeleton: &Skeleton) -> &mut Continuation {
let (_pop_count, final_node) = self.extend_walk(&mut Vec::new(), 0, 0, skeleton);
&mut final_node.continuation
}
fn extend_walk(&mut self, path: &mut Path, pop_count: usize, index: usize, skeleton: &Skeleton)
-> (usize, &mut Node) {
match skeleton {
Skeleton::Blank => (pop_count, self),
Skeleton::Guarded(cls, kids) => {
let selector = Selector { pop_count, index };
let continuation = &self.continuation;
let table = self.edges.entry(selector).or_insert_with(Map::new);
let mut next_node = table.entry(cls.clone()).or_insert_with(|| {
Self::new(Continuation::new(
continuation.cached_assertions.iter()
.filter(|a| {
Some(cls) == class_of(project_path(a.unscope(), path)).as_ref() })
.cloned()
.collect()))
});
let mut pop_count = 0;
for (index, kid) in kids.iter().enumerate() {
path.push(index);
let (pc, nn) = next_node.extend_walk(path, pop_count, index, kid);
pop_count = pc;
next_node = nn;
path.pop();
}
(pop_count + 1, next_node)
}
}
}
}
#[derive(Debug)]
pub enum Stack<'a, T> {
Empty,
Item(T, &'a Stack<'a, T>)
}
impl<'a, T> Stack<'a, T> {
fn pop(&self) -> &Self {
match self {
Stack::Empty => panic!("Internal error: pop: Incorrect pop_count computation"),
Stack::Item(_, tail) => tail
}
}
fn top(&self) -> &T {
match self {
Stack::Empty => panic!("Internal error: top: Incorrect pop_count computation"),
Stack::Item(item, _) => item
}
}
}
struct Modification<'op, FCont, FLeaf, FEndpoints>
where FCont: FnMut(&mut Continuation, &CachedAssertion) -> (),
FLeaf: FnMut(&mut Leaf, &CachedAssertion) -> (),
FEndpoints: FnMut(&mut Endpoints, Captures) -> ()
{
create_leaf_if_absent: bool,
outer_value: &'op CachedAssertion,
restriction_paths: Option<&'op Paths>,
outer_value_term: &'op Assertion,
m_cont: FCont,
m_leaf: FLeaf,
m_endpoints: FEndpoints,
}
impl<'op, FCont, FLeaf, FEndpoints> Modification<'op, FCont, FLeaf, FEndpoints>
where FCont: FnMut(&mut Continuation, &CachedAssertion) -> (),
FLeaf: FnMut(&mut Leaf, &CachedAssertion) -> (),
FEndpoints: FnMut(&mut Endpoints, Captures) -> ()
{
fn new(create_leaf_if_absent: bool,
outer_value: &'op CachedAssertion,
m_cont: FCont,
m_leaf: FLeaf,
m_endpoints: FEndpoints) -> Self {
let (restriction_paths, outer_value_term) = outer_value.unpack();
Modification {
create_leaf_if_absent,
outer_value,
restriction_paths,
outer_value_term,
m_cont,
m_leaf,
m_endpoints,
}
}
fn perform(&mut self, n: &mut Node) {
self.node(n, &Stack::Item(&Value::from(vec![self.outer_value_term.clone()]).wrap(), &Stack::Empty))
}
fn node(&mut self, n: &mut Node, term_stack: &Stack<&Assertion>) {
self.continuation(&mut n.continuation);
for (selector, table) in &mut n.edges {
let mut next_stack = term_stack;
for _ in 0..selector.pop_count { next_stack = next_stack.pop() }
let next_value = step(next_stack.top(), selector.index);
if let Some(next_class) = class_of(next_value) {
if let Some(next_node) = table.get_mut(&next_class) {
self.node(next_node, &Stack::Item(next_value, next_stack))
}
}
}
}
fn continuation(&mut self, c: &mut Continuation) {
(self.m_cont)(c, self.outer_value);
let mut empty_const_paths = Vec::new();
for (const_paths, const_val_map) in &mut c.leaf_map {
let const_vals = project_paths(self.outer_value_term, const_paths);
let leaf_opt = if self.create_leaf_if_absent {
Some(const_val_map.entry(const_vals.clone()).or_insert_with(Leaf::new))
} else {
const_val_map.get_mut(&const_vals)
};
if let Some(leaf) = leaf_opt {
(self.m_leaf)(leaf, self.outer_value);
for (capture_paths, endpoints) in &mut leaf.endpoints_map {
if is_unrestricted(&capture_paths, self.restriction_paths) {
(self.m_endpoints)(endpoints,
project_paths(self.outer_value_term, &capture_paths));
}
}
if leaf.is_empty() {
const_val_map.remove(&const_vals);
if const_val_map.is_empty() {
empty_const_paths.push(const_paths.clone());
}
}
}
}
for const_paths in empty_const_paths {
c.leaf_map.remove(&const_paths);
}
}
}
fn class_of(v: &Assertion) -> Option<Guard> {
match v.value() {
Value::Sequence(ref vs) => Some(Guard::Seq(vs.len())),
Value::Record(ref r) => Some(Guard::Rec(r.label().clone(), r.arity())),
_ => None,
}
}
fn project_path<'a>(v: &'a Assertion, p: &Path) -> &'a Assertion {
let mut v = v;
for i in p {
v = step(v, *i);
}
v
}
fn project_paths<'a>(v: &'a Assertion, ps: &Paths) -> Captures {
Arc::new(ps.iter().map(|p| project_path(v, p)).cloned().collect())
}
fn step(v: &Assertion, i: usize) -> &Assertion {
match v.value() {
Value::Sequence(ref vs) => &vs[i],
Value::Record(ref r) => &r.fields()[i],
_ => panic!("step: non-sequence, non-record {:?}", v)
}
}
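As a standalone illustration of how a `Path` (a `Vec<usize>`) selects a subterm via repeated `step`s, here is a minimal sketch over a hypothetical stand-in value type (the `V` enum below is illustrative only, not the crate's Preserves `Assertion`):

```rust
// Minimal stand-in for a nested value: each index in a path
// steps one level deeper into a sequence, mirroring `step`/`project_path`.
#[derive(Debug, PartialEq, Clone)]
enum V { Leaf(i64), Seq(Vec<V>) }

fn step(v: &V, i: usize) -> &V {
    match v {
        V::Seq(vs) => &vs[i],
        V::Leaf(_) => panic!("step: cannot index into a leaf"),
    }
}

fn project_path<'a>(mut v: &'a V, path: &[usize]) -> &'a V {
    for &i in path { v = step(v, i); }
    v
}

fn main() {
    // Roughly <outer <inner 1 2> 3>, modelled with nested sequences.
    let v = V::Seq(vec![V::Seq(vec![V::Leaf(1), V::Leaf(2)]), V::Leaf(3)]);
    assert_eq!(project_path(&v, &[0, 1]), &V::Leaf(2));
    assert_eq!(project_path(&v, &[1]), &V::Leaf(3));
    assert_eq!(project_path(&v, &[]), &v); // the empty path selects the whole value
}
```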
#[derive(Debug)]
struct Continuation {
cached_assertions: Set<CachedAssertion>,
leaf_map: Map<Paths, Map<Captures, Leaf>>,
}
impl Continuation {
fn new(cached_assertions: Set<CachedAssertion>) -> Self {
Continuation { cached_assertions, leaf_map: Map::new() }
}
}
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Selector {
pop_count: usize,
index: usize,
}
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
pub enum Guard {
Rec(Assertion, usize),
Seq(usize),
}
impl Guard {
fn arity(&self) -> usize {
match self {
Guard::Rec(_, s) => *s,
Guard::Seq(s) => *s
}
}
}
#[derive(Debug)]
struct Leaf { // aka Topic
cached_assertions: Set<CachedAssertion>,
endpoints_map: Map<Paths, Endpoints>,
}
impl Leaf {
fn new() -> Self {
Leaf { cached_assertions: Set::new(), endpoints_map: Map::new() }
}
fn is_empty(&self) -> bool {
self.cached_assertions.is_empty() && self.endpoints_map.is_empty()
}
}
#[derive(Debug)]
struct Endpoints {
cached_captures: Bag<Captures>,
endpoints: Set<Endpoint>,
}
impl Endpoints {
fn new(cached_captures: Bag<Captures>) -> Self {
Endpoints { cached_captures, endpoints: Set::new() }
}
}
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
pub enum CachedAssertion {
VisibilityRestricted(Paths, Assertion),
Unrestricted(Assertion),
}
impl From<&Assertion> for CachedAssertion {
fn from(a: &Assertion) -> Self {
CachedAssertion::Unrestricted(a.clone())
}
}
impl CachedAssertion {
fn unscope(&self) -> &Assertion {
match self {
CachedAssertion::VisibilityRestricted(_, a) => a,
CachedAssertion::Unrestricted(a) => a,
}
}
fn unpack(&self) -> (Option<&Paths>, &Assertion) {
match self {
CachedAssertion::VisibilityRestricted(ps, a) => (Some(ps), a),
CachedAssertion::Unrestricted(a) => (None, a),
}
}
}
fn is_unrestricted(capture_paths: &Paths, restriction_paths: Option<&Paths>) -> bool {
// We are "unrestricted" if Set(capture_paths) ⊆ Set(restriction_paths). Since both
// variables really hold lists, we operate with awareness of the order the lists are
// built here. We know that the lists are built in fringe order; that is, they are
// sorted wrt `pathCmp`.
match restriction_paths {
None => true, // not visibility-restricted in the first place
Some(rpaths) => {
let mut rpi = rpaths.iter();
'outer: for c in capture_paths {
'inner: loop {
match rpi.next() {
None => {
// there's at least one capture_paths entry (`c`) that does
// not appear in restriction_paths, so we are restricted
return false;
}
Some(r) => match c.cmp(r) {
Ordering::Less => {
// `c` is less than `r`, but restriction_paths is sorted,
// so `c` does not appear in restriction_paths, and we are
// thus restricted.
return false;
}
Ordering::Equal => {
// `c` is equal to `r`, so we may yet be unrestricted.
// Discard both `c` and `r` and continue.
continue 'outer;
}
Ordering::Greater => {
// `c` is greater than `r`, but capture_paths and
// restriction_paths are sorted, so while we might yet
// come to an `r` that is equal to `c`, we will never find
// another `c` that is less than this `c`. Discard this
// `r` then, keeping the `c`, and compare against the next
// `r`.
continue 'inner;
}
}
}
}
}
// We went all the way through capture_paths without finding any `c` not in
// restriction_paths.
true
}
}
}
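The merge-style subset test above relies on both path lists being sorted in fringe order. A minimal, self-contained sketch of the same check over plain `Vec<Vec<usize>>` paths (the function name and test data here are illustrative, not from the crate):

```rust
// Subset check over two sorted path lists, mirroring `is_unrestricted`:
// true iff every capture path also appears among the restriction paths.
// `None` means the assertion is not visibility-restricted at all.
fn unrestricted(capture_paths: &[Vec<usize>], restriction_paths: Option<&[Vec<usize>]>) -> bool {
    let rpaths = match restriction_paths {
        None => return true,
        Some(r) => r,
    };
    let mut rpi = rpaths.iter();
    'outer: for c in capture_paths {
        loop {
            match rpi.next() {
                None => return false, // `c` never matched: restricted
                Some(r) => match c.cmp(r) {
                    std::cmp::Ordering::Less => return false,   // sortedness: `c` cannot appear later
                    std::cmp::Ordering::Equal => continue 'outer, // consume both, move to next `c`
                    std::cmp::Ordering::Greater => continue,      // discard this `r`, keep `c`
                },
            }
        }
    }
    true // every capture path was found among the restriction paths
}

fn main() {
    let caps = vec![vec![0], vec![1, 0]];
    let rs = vec![vec![0], vec![0, 1], vec![1, 0]];
    assert!(unrestricted(&caps, Some(&rs)));
    assert!(!unrestricted(&caps, Some(&rs[..2]))); // [1,0] missing: restricted
    assert!(unrestricted(&caps, None));
}
```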
pub struct Analyzer {
const_paths: Paths,
const_vals: Vec<Assertion>,
capture_paths: Paths,
path: Path,
}
impl Analyzer {
fn walk(&mut self, mut a: &Assertion) -> Skeleton {
while let Some(fields) = a.value().as_simple_record("capture", Some(1)) {
self.capture_paths.push(self.path.clone());
a = &fields[0];
}
if a.value().is_simple_record("discard", Some(0)) {
Skeleton::Blank
} else {
match class_of(a) {
Some(cls) => {
let arity = cls.arity();
Skeleton::Guarded(cls,
(0..arity).map(|i| {
self.path.push(i);
let s = self.walk(step(a, i));
self.path.pop();
s
}).collect())
}
None => {
self.const_paths.push(self.path.clone());
self.const_vals.push(a.clone());
Skeleton::Blank
}
}
}
}
}
pub fn analyze(a: &Assertion) -> AnalysisResults {
let mut z = Analyzer {
const_paths: Vec::new(),
const_vals: Vec::new(),
capture_paths: Vec::new(),
path: Vec::new(),
};
let skeleton = z.walk(a);
AnalysisResults {
skeleton,
const_paths: z.const_paths,
const_vals: Arc::new(z.const_vals),
capture_paths: z.capture_paths,
assertion: a.clone(),
}
}
// pub fn instantiate_assertion(a: &Assertion, cs: Captures) -> CachedAssertion {
// let mut capture_paths = Vec::new();
// let mut path = Vec::new();
// let mut vs: Vec<Assertion> = (*cs).clone();
// vs.reverse();
// let instantiated = instantiate_assertion_walk(&mut capture_paths, &mut path, &mut vs, a);
// CachedAssertion::VisibilityRestricted(capture_paths, instantiated)
// }
// fn instantiate_assertion_walk(capture_paths: &mut Paths,
// path: &mut Path,
// vs: &mut Vec<Assertion>,
// a: &Assertion) -> Assertion {
// if let Some(fields) = a.value().as_simple_record("capture", Some(1)) {
// capture_paths.push(path.clone());
// let v = vs.pop().unwrap();
// instantiate_assertion_walk(capture_paths, path, vs, &fields[0]);
// v
// } else if a.value().is_simple_record("discard", Some(0)) {
// Value::Domain(Syndicate::new_placeholder()).wrap()
// } else {
// let f = |(i, aa)| {
// path.push(i);
// let vv = instantiate_assertion_walk(capture_paths,
// path,
// vs,
// aa);
// path.pop();
// vv
// };
// match class_of(a) {
// Some(Guard::Seq(_)) =>
// Value::from(Vec::from_iter(a.value().as_sequence().unwrap()
// .iter().enumerate().map(f)))
// .wrap(),
// Some(Guard::Rec(l, fieldcount)) =>
// Value::record(l, a.value().as_record(Some(fieldcount)).unwrap().1
// .iter().enumerate().map(f).collect())
// .wrap(),
// None =>
// a.clone(),
// }
// }
// }


@ -1,54 +0,0 @@
use super::V;
use super::dataspace;
use std::sync::Arc;
use tracing::{info, debug};
use preserves::value::Map;
pub struct Spaces {
index: Map<V, dataspace::DataspaceRef>,
}
impl Spaces {
pub fn new() -> Self {
Self { index: Map::new() }
}
pub fn lookup(&mut self, name: &V) -> dataspace::DataspaceRef {
let (is_new, space) = match self.index.get(name) {
Some(s) => (false, s.clone()),
None => {
let s = dataspace::Dataspace::new_ref(name);
self.index.insert(name.clone(), s.clone());
(true, s)
}
};
debug!(name = debug(name),
action = display(if is_new { "created" } else { "accessed" }));
space
}
pub fn cleanup(&mut self) {
self.index = self.index.iter()
.filter(|s| s.1.read().unwrap().peer_count() > 0)
.map(|(k,v)| (k.clone(), Arc::clone(v)))
.collect();
}
pub fn dump_stats(&self, delta: core::time::Duration) {
for (dsname, dsref) in &self.index {
let mut ds = dsref.write().unwrap();
info!(name = debug(dsname),
connections = display(format!("{} (+{}/-{})", ds.peer_count(), ds.churn.peers_added, ds.churn.peers_removed)),
assertions = display(format!("{} (+{}/-{})", ds.assertion_count(), ds.churn.assertions_added, ds.churn.assertions_removed)),
endpoints = display(format!("{} (+{}/-{})", ds.endpoint_count(), ds.churn.endpoints_added, ds.churn.endpoints_removed)),
msg_in_rate = display(ds.churn.messages_injected as f32 / delta.as_secs() as f32),
msg_out_rate = display(ds.churn.messages_delivered as f32 / delta.as_secs() as f32));
ds.churn.reset();
}
}
}


@ -0,0 +1,27 @@
[package]
name = "syndicate-macros"
version = "0.32.0"
authors = ["Tony Garnock-Jones <tonyg@leastfixedpoint.com>"]
edition = "2018"
description = "Support macros for programming with the Syndicated Actor model and Dataspaces."
homepage = "https://syndicate-lang.org/"
repository = "https://git.syndicate-lang.org/syndicate-lang/syndicate-rs"
license = "Apache-2.0"
[lib]
proc-macro = true
[dependencies]
syndicate = { path = "../syndicate", version = "0.40.0"}
proc-macro2 = { version = "^1.0", features = ["span-locations"] }
quote = "^1.0"
syn = { version = "^1.0", features = ["extra-traits"] } # for impl Debug for syn::Expr
[dev-dependencies]
tokio = { version = "1.10", features = ["io-std"] }
tracing = "0.1"
[package.metadata.workspaces]
independent = true


@ -0,0 +1,82 @@
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::dataspace::Dataspace;
use syndicate::language;
use syndicate::schemas::dataspace::Observe;
use syndicate::value::NestedValue;
#[tokio::main]
async fn main() -> ActorResult {
syndicate::convenient_logging()?;
Actor::top(None, |t| {
let ds = Cap::new(&t.create(Dataspace::new(None)));
let _ = t.prevent_inert_check();
t.spawn(Some(AnyValue::symbol("box")), enclose!((ds) move |t| {
let current_value = t.named_field("current_value", 0u64);
t.dataflow({
let mut state_assertion_handle = None;
enclose!((ds, current_value) move |t| {
let v = AnyValue::new(*t.get(&current_value));
tracing::info!(?v, "asserting");
ds.update(t, &mut state_assertion_handle, &(),
Some(&syndicate_macros::template!("<box-state =v>")));
Ok(())
})
})?;
let set_box_handler = syndicate::entity(())
.on_message(enclose!((current_value) move |(), t, captures: AnyValue| {
let v = captures.value().to_sequence()?[0].value().to_u64()?;
tracing::info!(?v, "from set-box");
t.set(&current_value, v);
Ok(())
}))
.create_cap(t);
ds.assert(t, language(), &Observe {
pattern: syndicate_macros::pattern!{<set-box $>},
observer: set_box_handler,
});
t.dataflow(enclose!((current_value) move |t| {
if *t.get(&current_value) == 1000000 {
t.stop();
}
Ok(())
}))?;
Ok(())
}));
t.spawn(Some(AnyValue::symbol("client")), enclose!((ds) move |t| {
let box_state_handler = syndicate::entity(0u32)
.on_asserted(enclose!((ds) move |count, t, captures: AnyValue| {
*count += 1;
let value = captures.value().to_sequence()?[0].value().to_u64()?;
tracing::info!(?value);
let next = AnyValue::new(value + 1);
tracing::info!(?next, "sending");
ds.message(t, &(), &syndicate_macros::template!("<set-box =next>"));
Ok(Some(Box::new(|count, t| {
*count -= 1;
if *count == 0 {
tracing::info!("box state retracted");
t.stop();
}
Ok(())
})))
}))
.create_cap(t);
ds.assert(t, language(), &Observe {
pattern: syndicate_macros::pattern!{<box-state $>},
observer: box_state_handler,
});
Ok(())
}));
Ok(())
}).await??;
Ok(())
}


@ -0,0 +1,133 @@
use syndicate::actor::*;
use std::env;
use std::sync::Arc;
#[derive(Debug)]
enum Instruction {
SetPeer(Arc<Ref<Instruction>>),
HandleMessage(u64),
}
struct Forwarder {
hop_limit: u64,
supervisor: Arc<Ref<Instruction>>,
peer: Option<Arc<Ref<Instruction>>>,
}
impl Drop for Forwarder {
fn drop(&mut self) {
let r = self.peer.take();
let _ = tokio::spawn(async move {
drop(r);
});
}
}
impl Entity<Instruction> for Forwarder {
fn message(&mut self, turn: &mut Activation, message: Instruction) -> ActorResult {
match message {
Instruction::SetPeer(r) => {
tracing::info!("Setting peer {:?}", r);
self.peer = Some(r);
}
Instruction::HandleMessage(n) => {
let target = if n >= self.hop_limit { &self.supervisor } else { self.peer.as_ref().expect("peer") };
turn.message(target, Instruction::HandleMessage(n + 1));
}
}
Ok(())
}
}
struct Supervisor {
latency_mode: bool,
total_transfers: u64,
remaining_to_receive: u32,
start_time: Option<std::time::Instant>,
}
impl Entity<Instruction> for Supervisor {
fn message(&mut self, turn: &mut Activation, message: Instruction) -> ActorResult {
match message {
Instruction::SetPeer(_) => {
tracing::info!("Start");
self.start_time = Some(std::time::Instant::now());
},
Instruction::HandleMessage(_n) => {
self.remaining_to_receive -= 1;
if self.remaining_to_receive == 0 {
let stop_time = std::time::Instant::now();
let duration = stop_time - self.start_time.unwrap();
tracing::info!("Stop after {:?}; {:?} messages, so {:?} Hz ({} mode)",
duration,
self.total_transfers,
(1000.0 * self.total_transfers as f64) / duration.as_millis() as f64,
if self.latency_mode { "latency" } else { "throughput" });
turn.stop_root();
}
},
}
Ok(())
}
}
#[tokio::main]
async fn main() -> ActorResult {
syndicate::convenient_logging()?;
Actor::top(None, |t| {
let args: Vec<String> = env::args().collect();
let n_actors: u32 = args.get(1).unwrap_or(&"1000000".to_string()).parse()?;
let n_rounds: u32 = args.get(2).unwrap_or(&"200".to_string()).parse()?;
let latency_mode: bool = match args.get(3).unwrap_or(&"throughput".to_string()).as_str() {
"latency" => true,
"throughput" => false,
_other => return Err("Invalid throughput/latency mode".into()),
};
tracing::info!("Will run {:?} actors for {:?} rounds", n_actors, n_rounds);
let total_transfers: u64 = n_actors as u64 * n_rounds as u64;
let (hop_limit, injection_count) = if latency_mode {
(total_transfers, 1)
} else {
(n_rounds as u64, n_actors)
};
let me = t.create(Supervisor {
latency_mode,
total_transfers,
remaining_to_receive: injection_count,
start_time: None,
});
let mut forwarders: Vec<Arc<Ref<Instruction>>> = Vec::new();
for i in 0 .. n_actors {
if i % 10000 == 0 { tracing::info!("Actor {:?}", i); }
forwarders.push(
t.spawn_for_entity(None, true, Box::new(
Forwarder {
hop_limit,
supervisor: me.clone(),
peer: forwarders.last().cloned(),
}))
.0.expect("an entity"));
}
t.message(&forwarders[0], Instruction::SetPeer(forwarders.last().expect("an entity").clone()));
t.later(move |t| {
t.message(&me, Instruction::SetPeer(me.clone()));
t.later(move |t| {
let mut injected: u32 = 0;
for f in forwarders.into_iter() {
if injected >= injection_count {
break;
}
t.message(&f, Instruction::HandleMessage(0));
injected += 1;
}
Ok(())
});
Ok(())
});
Ok(())
}).await??;
Ok(())
}


@ -0,0 +1,175 @@
use std::env;
use std::sync::Arc;
use std::sync::atomic::AtomicU64;
use std::sync::atomic::Ordering;
use tokio::sync::mpsc::{unbounded_channel, UnboundedSender};
type Ref<T> = UnboundedSender<Box<T>>;
#[derive(Debug)]
enum Instruction {
SetPeer(Arc<Ref<Instruction>>),
HandleMessage(u64),
}
struct Forwarder {
hop_limit: u64,
supervisor: Arc<Ref<Instruction>>,
peer: Option<Arc<Ref<Instruction>>>,
}
impl Drop for Forwarder {
fn drop(&mut self) {
let r = self.peer.take();
let _ = tokio::spawn(async move {
drop(r);
});
}
}
enum Action { Continue, Stop }
trait Actor<T> {
fn message(&mut self, message: T) -> Action;
}
fn send<T: std::marker::Send + 'static>(ch: &Arc<Ref<T>>, message: T) {
match ch.send(Box::new(message)) {
Ok(()) => (),
Err(v) => panic!("Aiee! Could not send {:?}", v),
}
}
fn spawn<T: std::marker::Send + 'static, R: Actor<T> + std::marker::Send + 'static>(rt: Option<Arc<AtomicU64>>, mut ac: R) -> Arc<Ref<T>> {
let (tx, mut rx) = unbounded_channel::<Box<T>>();
if let Some(ref c) = rt {
c.fetch_add(1, Ordering::SeqCst);
}
tokio::spawn(async move {
loop {
match rx.recv().await {
None => break,
Some(message) => {
match ac.message(*message) {
Action::Continue => continue,
Action::Stop => break,
}
}
}
}
if let Some(c) = rt {
c.fetch_sub(1, Ordering::SeqCst);
}
});
Arc::new(tx)
}
impl Actor<Instruction> for Forwarder {
fn message(&mut self, message: Instruction) -> Action {
match message {
Instruction::SetPeer(r) => {
tracing::info!("Setting peer {:?}", r);
self.peer = Some(r);
}
Instruction::HandleMessage(n) => {
let target = if n >= self.hop_limit { &self.supervisor } else { self.peer.as_ref().expect("peer") };
send(target, Instruction::HandleMessage(n + 1));
}
}
Action::Continue
}
}
struct Supervisor {
latency_mode: bool,
total_transfers: u64,
remaining_to_receive: u32,
start_time: Option<std::time::Instant>,
}
impl Actor<Instruction> for Supervisor {
fn message(&mut self, message: Instruction) -> Action {
match message {
Instruction::SetPeer(_) => {
tracing::info!("Start");
self.start_time = Some(std::time::Instant::now());
},
Instruction::HandleMessage(_n) => {
self.remaining_to_receive -= 1;
if self.remaining_to_receive == 0 {
let stop_time = std::time::Instant::now();
let duration = stop_time - self.start_time.unwrap();
tracing::info!("Stop after {:?}; {:?} messages, so {:?} Hz ({} mode)",
duration,
self.total_transfers,
(1000.0 * self.total_transfers as f64) / duration.as_millis() as f64,
if self.latency_mode { "latency" } else { "throughput" });
return Action::Stop;
}
},
}
Action::Continue
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + std::marker::Send + std::marker::Sync>> {
syndicate::convenient_logging()?;
let args: Vec<String> = env::args().collect();
let n_actors: u32 = args.get(1).unwrap_or(&"1000000".to_string()).parse()?;
let n_rounds: u32 = args.get(2).unwrap_or(&"200".to_string()).parse()?;
let latency_mode: bool = match args.get(3).unwrap_or(&"throughput".to_string()).as_str() {
"latency" => true,
"throughput" => false,
_other => return Err("Invalid throughput/latency mode".into()),
};
tracing::info!("Will run {:?} actors for {:?} rounds", n_actors, n_rounds);
let count = Arc::new(AtomicU64::new(0));
let total_transfers: u64 = n_actors as u64 * n_rounds as u64;
let (hop_limit, injection_count) = if latency_mode {
(total_transfers, 1)
} else {
(n_rounds as u64, n_actors)
};
let me = spawn(Some(count.clone()), Supervisor {
latency_mode,
total_transfers,
remaining_to_receive: injection_count,
start_time: None,
});
let mut forwarders: Vec<Arc<Ref<Instruction>>> = Vec::new();
for i in 0 .. n_actors {
if i % 10000 == 0 { tracing::info!("Actor {:?}", i); }
forwarders.push(spawn(None, Forwarder {
hop_limit,
supervisor: me.clone(),
peer: forwarders.last().cloned(),
}));
}
send(&forwarders[0], Instruction::SetPeer(forwarders.last().expect("an entity").clone()));
send(&me, Instruction::SetPeer(me.clone()));
let mut injected: u32 = 0;
for f in forwarders.into_iter() {
if injected >= injection_count {
break;
}
send(&f, Instruction::HandleMessage(0));
injected += 1;
}
loop {
if count.load(Ordering::SeqCst) == 0 {
break;
}
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
}
Ok(())
}
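Both benchmark modes perform the same total amount of work: latency mode injects one message that travels `total_transfers` hops around the ring, while throughput mode injects `n_actors` messages of `n_rounds` hops each. A quick sketch of that accounting (function name illustrative):

```rust
// Total hops performed for given ring parameters, mirroring the
// (hop_limit, injection_count) split in the benchmark's main().
fn total_hops(n_actors: u64, n_rounds: u64, latency_mode: bool) -> u64 {
    let total_transfers = n_actors * n_rounds;
    let (hop_limit, injection_count) = if latency_mode {
        (total_transfers, 1)
    } else {
        (n_rounds, n_actors)
    };
    // Each injected message is forwarded until its counter reaches hop_limit.
    hop_limit * injection_count
}

fn main() {
    // Same work in either mode; only the concurrency profile differs.
    assert_eq!(total_hops(1_000_000, 200, true), total_hops(1_000_000, 200, false));
    assert_eq!(total_hops(1_000_000, 200, true), 200_000_000);
}
```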

syndicate-macros/src/dur.rs

@ -0,0 +1,138 @@
use proc_macro2::Span;
use quote::quote_spanned;
use syn::parse_macro_input;
use syn::Expr;
use syn::Ident;
use syn::LitInt;
use syn::Token;
use syn::Type;
use syn::parse::Error;
use syn::parse::Parse;
use syn::parse::ParseStream;
use crate::stx::Stx;
use crate::pat;
#[derive(Debug)]
struct During {
turn_stx: Expr,
ds_stx: Expr,
lang_stx: Expr,
pat_stx: Stx,
body_stx: Expr,
}
fn comma_parse<T: Parse>(input: ParseStream) -> syn::parse::Result<T> {
let _: Token![,] = input.parse()?;
input.parse()
}
impl Parse for During {
fn parse(input: ParseStream) -> syn::parse::Result<Self> {
Ok(During {
turn_stx: input.parse()?,
ds_stx: comma_parse(input)?,
lang_stx: comma_parse(input)?,
pat_stx: comma_parse(input)?,
body_stx: comma_parse(input)?,
})
}
}
impl During {
fn bindings(&self) -> (Vec<Ident>, Vec<Type>, Vec<LitInt>) {
let mut ids = vec![];
let mut tys = vec![];
let mut indexes = vec![];
for (i, (maybe_id, ty)) in self.pat_stx.bindings().into_iter().enumerate() {
if let Some(id) = maybe_id {
indexes.push(LitInt::new(&i.to_string(), id.span()));
ids.push(id);
tys.push(ty);
}
}
(ids, tys, indexes)
}
}
pub fn during(src: proc_macro::TokenStream) -> proc_macro::TokenStream {
let d = parse_macro_input!(src as During);
let During { turn_stx, ds_stx, lang_stx, pat_stx, body_stx } = &d;
let (varname_stx, type_stx, index_stx) = d.bindings();
let binding_count = varname_stx.len();
let pat_stx_expr = match pat::to_pattern_expr(pat_stx) {
Ok(e) => e,
Err(e) => return Error::new(Span::call_site(), e).to_compile_error().into(),
};
(quote_spanned!{Span::mixed_site()=> {
let __ds = #ds_stx.clone();
let __lang = #lang_stx;
let monitor = syndicate::during::entity(())
.on_asserted_facet(move |_, t, captures: syndicate::actor::AnyValue| {
if let Some(captures) = {
use syndicate::value::NestedValue;
use syndicate::value::Value;
captures.value().as_sequence()
} {
if captures.len() == #binding_count {
#(let #varname_stx: #type_stx = match {
use syndicate::preserves_schema::Codec;
__lang.parse(&captures[#index_stx])
} {
Ok(v) => v,
Err(_) => return Ok(()),
};)*
return (#body_stx)(t);
}
}
Ok(())
})
.create_cap(#turn_stx);
__ds.assert(#turn_stx, __lang, &syndicate::schemas::dataspace::Observe {
pattern: #pat_stx_expr,
observer: monitor,
});
}}).into()
}
pub fn on_message(src: proc_macro::TokenStream) -> proc_macro::TokenStream {
let d = parse_macro_input!(src as During);
let During { turn_stx, ds_stx, lang_stx, pat_stx, body_stx } = &d;
let (varname_stx, type_stx, index_stx) = d.bindings();
let binding_count = varname_stx.len();
let pat_stx_expr = match pat::to_pattern_expr(pat_stx) {
Ok(e) => e,
Err(e) => return Error::new(Span::call_site(), e).to_compile_error().into(),
};
(quote_spanned!{Span::mixed_site()=> {
let __ds = #ds_stx.clone();
let __lang = #lang_stx;
let monitor = syndicate::during::entity(())
.on_message(move |_, t, captures: syndicate::actor::AnyValue| {
if let Some(captures) = {
use syndicate::value::NestedValue;
use syndicate::value::Value;
captures.value().as_sequence()
} {
if captures.len() == #binding_count {
#(let #varname_stx: #type_stx = match {
use syndicate::preserves_schema::Codec;
__lang.parse(&captures[#index_stx])
} {
Ok(v) => v,
Err(_) => return Ok(()),
};)*
return (#body_stx)(t);
}
}
Ok(())
})
.create_cap(#turn_stx);
__ds.assert(#turn_stx, __lang, &syndicate::schemas::dataspace::Observe {
pattern: #pat_stx_expr,
observer: monitor,
});
}}).into()
}

syndicate-macros/src/lib.rs

@ -0,0 +1,262 @@
#![feature(proc_macro_span)]
use syndicate::value::IOValue;
use syndicate::value::NestedValue;
use syndicate::value::Value;
use syndicate::value::text::iovalue_from_str;
use proc_macro2::Span;
use proc_macro2::TokenStream;
use quote::quote;
use std::convert::TryFrom;
use syn::parse_macro_input;
use syn::ExprLit;
use syn::Ident;
use syn::Lit;
use syn::LitByteStr;
mod dur;
mod pat;
mod stx;
mod val;
use pat::lit;
enum SymbolVariant<'a> {
Normal(&'a str),
#[allow(dead_code)] // otherwise we get 'warning: field `0` is never read'
Binder(&'a str),
Substitution(&'a str),
Discard,
}
fn compile_sequence_members(vs: &[IOValue]) -> Vec<TokenStream> {
vs.iter().enumerate().map(|(i, f)| {
let p = compile_pattern(f);
quote!((syndicate::value::Value::from(#i).wrap(), #p))
}).collect::<Vec<_>>()
}
fn analyze_symbol(s: &str, allow_binding_and_substitution: bool) -> SymbolVariant {
if !allow_binding_and_substitution {
SymbolVariant::Normal(s)
} else if s.starts_with("$") {
SymbolVariant::Binder(&s[1..])
} else if s.starts_with("=") {
SymbolVariant::Substitution(&s[1..])
} else if s == "_" {
SymbolVariant::Discard
} else {
SymbolVariant::Normal(s)
}
}
struct ValueCompiler {
allow_binding_and_substitution: bool,
}
impl ValueCompiler {
fn for_patterns() -> Self {
ValueCompiler {
allow_binding_and_substitution: false,
}
}
fn for_templates() -> Self {
ValueCompiler {
allow_binding_and_substitution: true,
}
}
fn compile(&self, v: &IOValue) -> TokenStream {
#[allow(non_snake_case)]
let V_: TokenStream = quote!(syndicate::value);
let walk = |w| self.compile(w);
match v.value() {
Value::Boolean(b) =>
quote!(#V_::Value::from(#b).wrap()),
Value::Double(d) => {
let d = d.0;
quote!(#V_::Value::from(#d).wrap())
}
Value::SignedInteger(i) => {
let i = i128::try_from(i).expect("Literal integer out-of-range");
quote!(#V_::Value::from(#i).wrap())
}
Value::String(s) =>
quote!(#V_::Value::from(#s).wrap()),
Value::ByteString(bs) => {
let bs = LitByteStr::new(bs, Span::call_site());
quote!(#V_::Value::from(#bs).wrap())
}
Value::Symbol(s) => match analyze_symbol(&s, self.allow_binding_and_substitution) {
SymbolVariant::Normal(s) =>
quote!(#V_::Value::symbol(#s).wrap()),
SymbolVariant::Binder(_) |
SymbolVariant::Discard =>
panic!("Binding/Discard not supported here"),
SymbolVariant::Substitution(s) => {
let i = Ident::new(s, Span::call_site());
quote!(#i)
}
}
Value::Record(r) => {
let arity = r.arity();
let label = walk(r.label());
let fs: Vec<_> = r.fields().iter().map(walk).collect();
quote!({
let mut ___r = #V_::Value::record(#label, #arity);
#(___r.fields_vec_mut().push(#fs);)*
___r.finish().wrap()
})
}
Value::Sequence(vs) => {
let vs: Vec<_> = vs.iter().map(walk).collect();
quote!(#V_::Value::from(vec![#(#vs),*]).wrap())
}
Value::Set(vs) => {
let vs: Vec<_> = vs.iter().map(walk).collect();
quote!({
let mut ___s = #V_::Set::new();
#(___s.insert(#vs);)*
#V_::Value::from(___s).wrap()
})
}
Value::Dictionary(d) => {
let members: Vec<_> = d.iter().map(|(k, v)| {
let k = walk(k);
let v = walk(v);
quote!(___d.insert(#k, #v))
}).collect();
quote!({
let mut ___d = #V_::Map::new();
#(#members;)*
#V_::Value::from(___d).wrap()
})
}
Value::Embedded(_) =>
panic!("Embedded values in compile-time Preserves templates not (yet?) supported"),
}
}
}
fn compile_pattern(v: &IOValue) -> TokenStream {
#[allow(non_snake_case)]
let P_: TokenStream = quote!(syndicate::schemas::dataspace_patterns);
#[allow(non_snake_case)]
let V_: TokenStream = quote!(syndicate::value);
#[allow(non_snake_case)]
let MapFrom_: TokenStream = quote!(<#V_::Map<_, _>>::from);
match v.value() {
Value::Symbol(s) => match analyze_symbol(&s, true) {
SymbolVariant::Binder(_) =>
quote!(#P_::Pattern::Bind{ pattern: Box::new(#P_::Pattern::Discard) }),
SymbolVariant::Discard =>
quote!(#P_::Pattern::Discard),
SymbolVariant::Substitution(s) =>
lit(Ident::new(s, Span::call_site())),
SymbolVariant::Normal(_) =>
lit(ValueCompiler::for_patterns().compile(v)),
}
Value::Record(r) => {
match r.label().value().as_symbol() {
None => panic!("Record labels in patterns must be symbols"),
Some(label) =>
if label.starts_with("$") && r.arity() == 1 {
let nested = compile_pattern(&r.fields()[0]);
quote!(#P_::Pattern::Bind{ pattern: Box::new(#nested) })
} else {
let label_stx = if label.starts_with("=") {
let id = Ident::new(&label[1..], Span::call_site());
quote!(#id)
} else {
quote!(#V_::Value::symbol(#label).wrap())
};
let members = compile_sequence_members(r.fields());
quote!(#P_::Pattern::Group {
type_: Box::new(#P_::GroupType::Rec { label: #label_stx }),
entries: #MapFrom_([#(#members),*]),
})
}
}
}
Value::Sequence(vs) => {
let members = compile_sequence_members(vs);
quote!(#P_::Pattern::Group {
type_: Box::new(#P_::GroupType::Arr),
entries: #MapFrom_([#(#members),*]),
})
}
Value::Set(_) =>
panic!("Cannot match sets in patterns"),
Value::Dictionary(d) => {
let members = d.iter().map(|(k, v)| {
let k = ValueCompiler::for_patterns().compile(k);
let v = compile_pattern(v);
quote!((#k, #v))
}).collect::<Vec<_>>();
quote!(#P_::Pattern::Group {
type_: Box::new(#P_::GroupType::Dict),
entries: #MapFrom_([#(#members),*]),
})
}
_ => lit(ValueCompiler::for_patterns().compile(v)),
}
}
#[proc_macro]
pub fn pattern_str(src: proc_macro::TokenStream) -> proc_macro::TokenStream {
if let Lit::Str(s) = parse_macro_input!(src as ExprLit).lit {
match iovalue_from_str(&s.value()) {
Ok(v) => {
let e = compile_pattern(&v);
// println!("{:#}", &e);
return e.into();
}
Err(_) => (),
}
}
panic!("Expected literal string containing the pattern and no more");
}
#[proc_macro]
pub fn template(src: proc_macro::TokenStream) -> proc_macro::TokenStream {
if let Lit::Str(s) = parse_macro_input!(src as ExprLit).lit {
match iovalue_from_str(&s.value()) {
Ok(v) => {
let e = ValueCompiler::for_templates().compile(&v);
// println!("{:#}", &e);
return e.into();
}
Err(_) => (),
}
}
panic!("Expected literal string containing the template and no more");
}
//---------------------------------------------------------------------------
#[proc_macro]
pub fn pattern(src: proc_macro::TokenStream) -> proc_macro::TokenStream {
pat::pattern(src)
}
//---------------------------------------------------------------------------
#[proc_macro]
pub fn during(src: proc_macro::TokenStream) -> proc_macro::TokenStream {
dur::during(src)
}
#[proc_macro]
pub fn on_message(src: proc_macro::TokenStream) -> proc_macro::TokenStream {
dur::on_message(src)
}


@ -0,0 +1,87 @@
use proc_macro::TokenStream;
use proc_macro2::TokenStream as TokenStream2;
use quote::ToTokens;
use quote::quote;
use syn::parse_macro_input;
use crate::stx::Stx;
use crate::val::to_value_expr;
use crate::val::value_to_value_expr;
pub fn lit<T: ToTokens>(e: T) -> TokenStream2 {
quote!(syndicate::pattern::lift_literal(#e))
}
fn compile_sequence_members(stxs: &Vec<Stx>) -> Result<Vec<TokenStream2>, &'static str> {
stxs.iter().enumerate().map(|(i, stx)| {
let p = to_pattern_expr(stx)?;
Ok(quote!((syndicate::value::Value::from(#i).wrap(), #p)))
}).collect()
}
pub fn to_pattern_expr(stx: &Stx) -> Result<TokenStream2, &'static str> {
#[allow(non_snake_case)]
let P_: TokenStream2 = quote!(syndicate::schemas::dataspace_patterns);
#[allow(non_snake_case)]
let V_: TokenStream2 = quote!(syndicate::value);
#[allow(non_snake_case)]
let MapFrom_: TokenStream2 = quote!(<#V_::Map<_, _>>::from);
match stx {
Stx::Atom(v) =>
Ok(lit(value_to_value_expr(&v))),
Stx::Binder(_, maybe_ty, maybe_pat) => {
let inner_pat_expr = match maybe_pat {
Some(p) => to_pattern_expr(&*p)?,
None => match maybe_ty {
Some(ty) => quote!(#ty::wildcard_dataspace_pattern()),
None => to_pattern_expr(&Stx::Discard)?,
}
};
Ok(quote!(#P_::Pattern::Bind { pattern: Box::new(#inner_pat_expr) }))
}
Stx::Subst(e) =>
Ok(lit(e)),
Stx::Discard =>
Ok(quote!(#P_::Pattern::Discard)),
Stx::Rec(l, fs) => {
let label = to_value_expr(&*l)?;
let members = compile_sequence_members(fs)?;
Ok(quote!(#P_::Pattern::Group {
type_: Box::new(#P_::GroupType::Rec { label: #label }),
entries: #MapFrom_([#(#members),*]),
}))
},
Stx::Seq(stxs) => {
let members = compile_sequence_members(stxs)?;
Ok(quote!(#P_::Pattern::Group {
type_: Box::new(#P_::GroupType::Arr),
entries: #MapFrom_([#(#members),*]),
}))
}
Stx::Set(_stxs) =>
Err("Set literals not supported in patterns"),
Stx::Dict(d) => {
let members = d.iter().map(|(k, v)| {
let k = to_value_expr(k)?;
let v = to_pattern_expr(v)?;
Ok(quote!((#k, #v)))
}).collect::<Result<Vec<_>, &'static str>>()?;
Ok(quote!(#P_::Pattern::Group {
type_: Box::new(#P_::GroupType::Dict),
entries: #MapFrom_([#(#members),*])
}))
}
}
}
pub fn pattern(src: TokenStream) -> TokenStream {
let src2 = src.clone();
let e = to_pattern_expr(&parse_macro_input!(src2 as Stx))
.expect("Cannot compile pattern").into();
// println!("\n{:#} -->\n{:#}\n", &src, &e);
e
}

syndicate-macros/src/stx.rs

@ -0,0 +1,283 @@
use proc_macro2::Delimiter;
use proc_macro2::LineColumn;
use proc_macro2::Span;
use proc_macro2::TokenStream;
use syn::ExprLit;
use syn::Ident;
use syn::Lit;
use syn::Result;
use syn::Type;
use syn::buffer::Cursor;
use syn::parse::Error;
use syn::parse::Parse;
use syn::parse::Parser;
use syn::parse::ParseStream;
use syn::parse_str;
use syndicate::value::Double;
use syndicate::value::IOValue;
use syndicate::value::NestedValue;
#[derive(Debug, Clone)]
pub enum Stx {
Atom(IOValue),
Binder(Option<Ident>, Option<Type>, Option<Box<Stx>>),
Discard,
Subst(TokenStream),
Rec(Box<Stx>, Vec<Stx>),
Seq(Vec<Stx>),
Set(Vec<Stx>),
Dict(Vec<(Stx, Stx)>),
}
impl Parse for Stx {
fn parse(input: ParseStream) -> Result<Self> {
input.step(|c| parse1(*c))
}
}
impl Stx {
pub fn bindings(&self) -> Vec<(Option<Ident>, Type)> {
let mut bs = vec![];
self._bindings(&mut bs);
bs
}
fn _bindings(&self, bs: &mut Vec<(Option<Ident>, Type)>) {
match self {
Stx::Atom(_) | Stx::Discard | Stx::Subst(_) => (),
Stx::Binder(id, ty, pat) => {
bs.push((id.clone(),
ty.clone().unwrap_or_else(
|| parse_str("syndicate::actor::AnyValue").unwrap())));
if let Some(p) = pat {
p._bindings(bs);
}
},
Stx::Rec(l, fs) => {
l._bindings(bs);
fs.iter().for_each(|f| f._bindings(bs));
},
Stx::Seq(vs) => vs.iter().for_each(|v| v._bindings(bs)),
Stx::Set(vs) => vs.iter().for_each(|v| v._bindings(bs)),
Stx::Dict(kvs) => kvs.iter().for_each(|(_k, v)| v._bindings(bs)),
}
}
}
fn punct_char(c: Cursor) -> Option<(char, Cursor)> {
c.punct().map(|(p, c)| (p.as_char(), c))
}
fn start_pos(s: Span) -> LineColumn {
// We would like to write
// s.start()
// here, but until [1] is fixed (perhaps via [2]), we have to go the unsafe route
// and assume we are in procedural macro context.
// [1]: https://github.com/dtolnay/proc-macro2/issues/402
// [2]: https://github.com/dtolnay/proc-macro2/pull/407
let u = s.unwrap().start();
LineColumn { column: u.column(), line: u.line() }
}
fn end_pos(s: Span) -> LineColumn {
// See start_pos
let u = s.unwrap().end();
LineColumn { column: u.column(), line: u.line() }
}
fn parse_id(mut c: Cursor) -> Result<(String, Cursor)> {
let mut id = String::new();
let mut prev_pos = start_pos(c.span());
loop {
if c.eof() || start_pos(c.span()) != prev_pos {
return Ok((id, c));
} else if let Some((p, next)) = c.punct() {
match p.as_char() {
'<' | '>' | '(' | ')' | '{' | '}' | '[' | ']' | ',' | ':' => return Ok((id, c)),
ch => {
id.push(ch);
prev_pos = end_pos(c.span());
c = next;
}
}
} else if let Some((i, next)) = c.ident() {
id.push_str(&i.to_string());
prev_pos = end_pos(i.span());
c = next;
} else {
return Ok((id, c));
}
}
}
fn parse_seq(delim_ch: char, mut c: Cursor) -> Result<(Vec<Stx>, Cursor)> {
let mut stxs = Vec::new();
loop {
c = skip_commas(c);
if c.eof() {
return Err(Error::new(c.span(), &format!("Expected {:?}", delim_ch)));
}
if let Some((p, next)) = c.punct() {
if p.as_char() == delim_ch {
return Ok((stxs, next));
}
}
let (stx, next) = parse1(c)?;
stxs.push(stx);
c = next;
}
}
fn skip_commas(mut c: Cursor) -> Cursor {
loop {
if let Some((',', next)) = punct_char(c) {
c = next;
continue;
}
return c;
}
}
fn parse_group<'c, R, F: Fn(Cursor<'c>) -> Result<(R, Cursor<'c>)>>(
mut c: Cursor<'c>,
f: F,
after: Cursor<'c>,
) -> Result<(Vec<R>, Cursor<'c>)> {
let mut stxs = Vec::new();
loop {
c = skip_commas(c);
if c.eof() {
return Ok((stxs, after));
}
let (stx, next) = f(c)?;
stxs.push(stx);
c = next;
}
}
fn parse_kv(c: Cursor) -> Result<((Stx, Stx), Cursor)> {
let (k, c) = parse1(c)?;
if let Some((':', c)) = punct_char(c) {
let (v, c) = parse1(c)?;
return Ok(((k, v), c));
}
Err(Error::new(c.span(), "Expected ':'"))
}
fn adjacent_ident(pos: LineColumn, c: Cursor) -> (Option<Ident>, Cursor) {
if start_pos(c.span()) != pos {
(None, c)
} else if let Some((id, next)) = c.ident() {
(Some(id), next)
} else {
(None, c)
}
}
fn parse_exactly_one<'c>(c: Cursor<'c>) -> Result<Stx> {
parse1(c).and_then(|(q, c)| if c.eof() {
Ok(q)
} else {
Err(Error::new(c.span(), "No more input expected"))
})
}
fn parse_generic<T: Parse>(mut c: Cursor) -> Option<(T, Cursor)> {
match T::parse.parse2(c.token_stream()) {
Ok(t) => Some((t, Cursor::empty())), // because parse2 checks for end-of-stream!
Err(e) => {
// OK, because parse2 checks for end-of-stream, let's chop
// the input at the position of the error and try again (!).
let mut collected = Vec::new();
let upto = start_pos(e.span());
while !c.eof() && start_pos(c.span()) != upto {
let (tt, next) = c.token_tree().unwrap();
collected.push(tt);
c = next;
}
match T::parse.parse2(collected.into_iter().collect()) {
Ok(t) => Some((t, c)),
Err(_) => None,
}
}
}
}
fn parse1(c: Cursor) -> Result<(Stx, Cursor)> {
if let Some((p, next)) = c.punct() {
match p.as_char() {
'<' => parse_seq('>', next).and_then(|(mut q,c)| if q.is_empty() {
Err(Error::new(c.span(), "Missing Record label"))
} else {
Ok((Stx::Rec(Box::new(q.remove(0)), q), c))
}),
'$' => {
let (maybe_id, next) = adjacent_ident(end_pos(p.span()), next);
let (maybe_type, next) = if let Some((':', next)) = punct_char(next) {
match parse_generic::<Type>(next) {
Some((t, next)) => (Some(t), next),
None => (None, next)
}
} else {
(None, next)
};
if let Some((inner, _, next)) = next.group(Delimiter::Brace) {
parse_exactly_one(inner).map(
|q| (Stx::Binder(maybe_id, maybe_type, Some(Box::new(q))), next))
} else {
Ok((Stx::Binder(maybe_id, maybe_type, None), next))
}
}
'#' => {
if let Some((inner, _, next)) = next.group(Delimiter::Brace) {
parse_group(inner, parse1, next).map(|(q,c)| (Stx::Set(q),c))
} else if let Some((inner, _, next)) = next.group(Delimiter::Parenthesis) {
Ok((Stx::Subst(inner.token_stream()), next))
} else if let Some((tt, next)) = next.token_tree() {
Ok((Stx::Subst(vec![tt].into_iter().collect()), next))
} else {
Err(Error::new(c.span(), "Expected expression to substitute"))
}
}
_ => Err(Error::new(c.span(), "Unexpected punctuation")),
}
} else if let Some((i, next)) = c.ident() {
if i.to_string() == "_" {
Ok((Stx::Discard, next))
} else {
parse_id(c).and_then(|(q,c)| Ok((Stx::Atom(IOValue::symbol(&q)), c)))
}
} else if let Some((literal, next)) = c.literal() {
let t: ExprLit = syn::parse_str(&literal.to_string())?;
let v = match t.lit {
Lit::Str(s) => IOValue::new(s.value()),
Lit::ByteStr(bs) => IOValue::new(&bs.value()[..]),
Lit::Byte(b) => IOValue::new(b.value()),
Lit::Char(_) => return Err(Error::new(c.span(), "Literal characters not supported")),
Lit::Int(i) => if i.suffix().starts_with("u") || !i.base10_digits().starts_with("-") {
IOValue::new(i.base10_parse::<u128>()?)
} else {
IOValue::new(i.base10_parse::<i128>()?)
}
Lit::Float(f) => if f.suffix() == "f32" {
IOValue::new(&Double(f.base10_parse::<f32>()? as f64))
} else {
IOValue::new(&Double(f.base10_parse::<f64>()?))
}
Lit::Bool(_) => return Err(Error::new(c.span(), "Literal booleans not supported")),
Lit::Verbatim(_) => return Err(Error::new(c.span(), "Verbatim literals not supported")),
};
Ok((Stx::Atom(v), next))
} else if let Some((inner, _, after)) = c.group(Delimiter::Brace) {
parse_group(inner, parse_kv, after).map(|(q,c)| (Stx::Dict(q),c))
} else if let Some((inner, _, after)) = c.group(Delimiter::Bracket) {
parse_group(inner, parse1, after).map(|(q,c)| (Stx::Seq(q),c))
} else {
Err(Error::new(c.span(), "Unexpected input"))
}
}

syndicate-macros/src/val.rs

@ -0,0 +1,103 @@
use proc_macro2::Span;
use proc_macro2::TokenStream as TokenStream2;
use quote::quote;
use std::convert::TryFrom;
use syn::LitByteStr;
use syndicate::value::IOValue;
use syndicate::value::NestedValue;
use syndicate::value::Value;
use crate::stx::Stx;
pub fn emit_record(label: TokenStream2, fs: &[TokenStream2]) -> TokenStream2 {
let arity = fs.len();
quote!({
let mut ___r = syndicate::value::Value::record(#label, #arity);
#(___r.fields_vec_mut().push(#fs);)*
___r.finish().wrap()
})
}
pub fn emit_seq(vs: &[TokenStream2]) -> TokenStream2 {
quote!(syndicate::value::Value::from(vec![#(#vs),*]).wrap())
}
pub fn emit_set(vs: &[TokenStream2]) -> TokenStream2 {
quote!({
let mut ___s = syndicate::value::Set::new();
#(___s.insert(#vs);)*
syndicate::value::Value::from(___s).wrap()
})
}
pub fn emit_dict<'a, I: Iterator<Item = (TokenStream2, TokenStream2)>>(d: I) -> TokenStream2 {
let members: Vec<_> = d.map(|(k, v)| quote!(___d.insert(#k, #v))).collect();
quote!({
let mut ___d = syndicate::value::Map::new();
#(#members;)*
syndicate::value::Value::from(___d).wrap()
})
}
pub fn value_to_value_expr(v: &IOValue) -> TokenStream2 {
#[allow(non_snake_case)]
let V_: TokenStream2 = quote!(syndicate::value);
match v.value() {
Value::Boolean(b) =>
quote!(#V_::Value::from(#b).wrap()),
Value::Double(d) => {
let d = d.0;
quote!(#V_::Value::from(#d).wrap())
}
Value::SignedInteger(i) => {
let i = i128::try_from(i).expect("Literal integer out-of-range");
quote!(#V_::Value::from(#i).wrap())
}
Value::String(s) =>
quote!(#V_::Value::from(#s).wrap()),
Value::ByteString(bs) => {
let bs = LitByteStr::new(bs, Span::call_site());
quote!(#V_::Value::ByteString(#bs.to_vec()).wrap())
}
Value::Symbol(s) =>
quote!(#V_::Value::symbol(#s).wrap()),
Value::Record(r) =>
emit_record(value_to_value_expr(r.label()),
&r.fields().iter().map(value_to_value_expr).collect::<Vec<_>>()),
Value::Sequence(vs) =>
emit_seq(&vs.iter().map(value_to_value_expr).collect::<Vec<_>>()),
Value::Set(vs) =>
emit_set(&vs.iter().map(value_to_value_expr).collect::<Vec<_>>()),
Value::Dictionary(d) =>
emit_dict(d.into_iter().map(|(k, v)| (value_to_value_expr(k), value_to_value_expr(v)))),
Value::Embedded(_) =>
panic!("Embedded values in compile-time Preserves templates not (yet?) supported"),
}
}
pub fn to_value_expr(stx: &Stx) -> Result<TokenStream2, &'static str> {
match stx {
Stx::Atom(v) => Ok(value_to_value_expr(&v)),
Stx::Binder(_, _, _) => Err("Cannot use binder in literal value"),
Stx::Discard => Err("Cannot use discard in literal value"),
Stx::Subst(e) => Ok(e.clone().into()),
Stx::Rec(l, fs) =>
Ok(emit_record(to_value_expr(&*l)?,
&fs.into_iter().map(to_value_expr).collect::<Result<Vec<_>,_>>()?)),
Stx::Seq(vs) =>
Ok(emit_seq(&vs.into_iter().map(to_value_expr).collect::<Result<Vec<_>,_>>()?)),
Stx::Set(vs) =>
Ok(emit_set(&vs.into_iter().map(to_value_expr).collect::<Result<Vec<_>,_>>()?)),
Stx::Dict(kvs) =>
Ok(emit_dict(kvs.into_iter()
.map(|(k, v)| Ok((to_value_expr(k)?, to_value_expr(v)?)))
.collect::<Result<Vec<_>,&'static str>>()?
.into_iter())),
}
}


@ -0,0 +1,15 @@
{
"folders": [
{
"path": "."
},
{
"path": "../syndicate-protocols"
}
],
"settings": {
"files.exclude": {
"target": true
}
}
}


@ -0,0 +1,19 @@
[package]
name = "syndicate-schema-plugin"
version = "0.9.0"
authors = ["Tony Garnock-Jones <tonyg@leastfixedpoint.com>"]
edition = "2018"
description = "Support for using Preserves Schema with Syndicate macros."
homepage = "https://syndicate-lang.org/"
repository = "https://git.syndicate-lang.org/syndicate-lang/syndicate-rs"
license = "Apache-2.0"
[lib]
[dependencies]
preserves-schema = "5.995"
syndicate = { path = "../syndicate", version = "0.40.0"}
[package.metadata.workspaces]
independent = true


@ -0,0 +1,3 @@
mod pattern_plugin;
pub use pattern_plugin::PatternPlugin;


@ -0,0 +1,164 @@
use preserves_schema::*;
use preserves_schema::compiler::*;
use preserves_schema::compiler::context::ModuleContext;
use preserves_schema::compiler::types::definition_type;
use preserves_schema::compiler::types::Purpose;
use preserves_schema::gen::schema::*;
use preserves_schema::syntax::block::escape_string;
use preserves_schema::syntax::block::constructors::*;
use std::iter::FromIterator;
use syndicate::pattern::lift_literal;
use syndicate::schemas::dataspace_patterns as P;
use syndicate::value::IOValue;
use syndicate::value::Map;
use syndicate::value::NestedValue;
#[derive(Debug)]
pub struct PatternPlugin;
type WalkState<'a, 'm, 'b> =
preserves_schema::compiler::cycles::WalkState<&'a ModuleContext<'m, 'b>>;
impl Plugin for PatternPlugin {
fn generate_definition(
&self,
ctxt: &mut ModuleContext,
definition_name: &str,
definition: &Definition,
) {
if ctxt.mode == context::ModuleContextMode::TargetGeneric {
let mut s = WalkState::new(ctxt, ctxt.module_path.clone());
if let Some(p) = definition.wc(&mut s) {
let ty = definition_type(&ctxt.module_path,
Purpose::Codegen,
definition_name,
definition);
let v = syndicate::language().unparse(&p);
let v = preserves_schema::support::preserves::value::TextWriter::encode(
&mut preserves_schema::support::preserves::value::NoEmbeddedDomainCodec,
&v).unwrap();
ctxt.define_type(item(seq![
"impl",
ty.generic_decl(ctxt),
" ",
names::render_constructor(definition_name),
ty.generic_arg(ctxt),
" ", codeblock![
seq!["#[allow(unused)] pub fn wildcard_dataspace_pattern() ",
"-> syndicate::schemas::dataspace_patterns::Pattern ",
codeblock![
"use syndicate::schemas::dataspace_patterns::*;",
"use preserves_schema::Codec;",
seq!["let _v = syndicate::value::text::from_str(",
escape_string(&v),
", syndicate::value::ViaCodec::new(syndicate::value::NoEmbeddedDomainCodec)).unwrap();"],
"syndicate::language().parse(&_v).unwrap()"]]]]));
}
}
}
}
fn discard() -> P::Pattern {
P::Pattern::Discard
}
trait WildcardPattern {
fn wc(&self, s: &mut WalkState) -> Option<P::Pattern>;
}
impl WildcardPattern for Definition {
fn wc(&self, s: &mut WalkState) -> Option<P::Pattern> {
match self {
Definition::Or { .. } => None,
Definition::And { .. } => None,
Definition::Pattern(p) => p.wc(s),
}
}
}
impl WildcardPattern for Pattern {
fn wc(&self, s: &mut WalkState) -> Option<P::Pattern> {
match self {
Pattern::CompoundPattern(p) => p.wc(s),
Pattern::SimplePattern(p) => p.wc(s),
}
}
}
fn from_io(v: &IOValue) -> Option<P::_Any> {
Some(v.value().copy_via(&mut |_| Err(())).ok()?.wrap())
}
impl WildcardPattern for CompoundPattern {
fn wc(&self, s: &mut WalkState) -> Option<P::Pattern> {
match self {
CompoundPattern::Tuple { patterns } |
CompoundPattern::TuplePrefix { fixed: patterns, .. } =>
Some(P::Pattern::Group {
type_: Box::new(P::GroupType::Arr),
entries: patterns.iter().enumerate()
.map(|(i, p)| Some((P::_Any::new(i), unname(p).wc(s)?)))
.collect::<Option<Map<P::_Any, P::Pattern>>>()?,
}),
CompoundPattern::Dict { entries } =>
Some(P::Pattern::Group {
type_: Box::new(P::GroupType::Dict),
entries: Map::from_iter(
entries.0.iter()
.map(|(k, p)| Some((from_io(k)?, unname_simple(p).wc(s)?)))
.filter(|e| discard() != e.as_ref().unwrap().1)
.collect::<Option<Vec<(P::_Any, P::Pattern)>>>()?
.into_iter()),
}),
CompoundPattern::Rec { label, fields } => match (unname(label), unname(fields)) {
(Pattern::SimplePattern(label), Pattern::CompoundPattern(fields)) =>
match (*label, *fields) {
(SimplePattern::Lit { value }, CompoundPattern::Tuple { patterns }) =>
Some(P::Pattern::Group{
type_: Box::new(P::GroupType::Rec { label: from_io(&value)? }),
entries: patterns.iter().enumerate()
.map(|(i, p)| Some((P::_Any::new(i), unname(p).wc(s)?)))
.collect::<Option<Map<P::_Any, P::Pattern>>>()?,
}),
_ => None,
},
_ => None,
},
}
}
}
impl WildcardPattern for SimplePattern {
fn wc(&self, s: &mut WalkState) -> Option<P::Pattern> {
match self {
SimplePattern::Any |
SimplePattern::Atom { .. } |
SimplePattern::Embedded { .. } |
SimplePattern::Seqof { .. } |
SimplePattern::Setof { .. } |
SimplePattern::Dictof { .. } => Some(discard()),
SimplePattern::Lit { value } => Some(lift_literal(&from_io(value)?)),
SimplePattern::Ref(r) => s.cycle_check(
r,
|ctxt, r| ctxt.bundle.lookup_definition(r).map(|v| v.0),
|s, d| d.and_then(|d| d.wc(s)).or_else(|| Some(discard())),
|| Some(discard())),
}
}
}
fn unname(np: &NamedPattern) -> Pattern {
match np {
NamedPattern::Anonymous(p) => (**p).clone(),
NamedPattern::Named(b) => Pattern::SimplePattern(Box::new(b.pattern.clone())),
}
}
fn unname_simple(np: &NamedSimplePattern) -> &SimplePattern {
match np {
NamedSimplePattern::Anonymous(p) => p,
NamedSimplePattern::Named(b) => &b.pattern,
}
}


@ -0,0 +1,47 @@
[package]
name = "syndicate-server"
version = "0.45.0"
authors = ["Tony Garnock-Jones <tonyg@leastfixedpoint.com>"]
edition = "2018"
description = "Dataspace server."
homepage = "https://syndicate-lang.org/"
repository = "https://git.syndicate-lang.org/syndicate-lang/syndicate-rs"
license = "Apache-2.0"
[features]
jemalloc = ["dep:tikv-jemallocator"]
[build-dependencies]
preserves-schema = "5.995"
syndicate = { path = "../syndicate", version = "0.40.0"}
syndicate-schema-plugin = { path = "../syndicate-schema-plugin", version = "0.9.0"}
[dependencies]
preserves-schema = "5.995"
syndicate = { path = "../syndicate", version = "0.40.0"}
syndicate-macros = { path = "../syndicate-macros", version = "0.32.0"}
chrono = "0.4"
futures = "0.3"
lazy_static = "1.4"
noise-protocol = "0.1"
noise-rust-crypto = "0.5"
notify = "4.0"
structopt = "0.3"
tikv-jemallocator = { version = "0.5.0", optional = true }
tokio = { version = "1.10", features = ["io-std", "time", "process"] }
tokio-util = "0.6"
tokio-stream = "0.1"
tracing = "0.1"
tracing-subscriber = "0.2"
tracing-futures = "0.2"
hyper = { version = "0.14.27", features = ["server", "http1", "stream"] }
hyper-tungstenite = "0.11.1"
parking_lot = "0.12.1"
[package.metadata.workspaces]
independent = true

syndicate-server/Makefile

@ -0,0 +1,19 @@
all: binary-debug

# cargo install cargo-watch
watch:
	cargo watch -c -x check -x 'test -- --nocapture'

run-watch:
	RUST_BACKTRACE=1 cargo watch -c -x 'build --all-targets' -x 'run'

inotifytest:
	inotifytest sh -c 'reset; cargo build && RUST_BACKTRACE=1 cargo test -- --nocapture'

binary: binary-release

binary-release:
	cargo build --release --all-targets --features jemalloc

binary-debug:
	cargo build --all-targets

syndicate-server/build.rs

@ -0,0 +1,32 @@
use preserves_schema::compiler::*;
fn main() -> std::io::Result<()> {
let buildroot = std::path::PathBuf::from(std::env::var_os("OUT_DIR").unwrap());
let mut gen_dir = buildroot.clone();
gen_dir.push("src/schemas");
let mut c = CompilerConfig::new("crate::schemas".to_owned());
c.plugins.push(Box::new(syndicate_schema_plugin::PatternPlugin));
c.add_external_module(ExternalModule::new(vec!["EntityRef".to_owned()], "syndicate::actor"));
c.add_external_module(
ExternalModule::new(vec!["TransportAddress".to_owned()],
"syndicate::schemas::transport_address")
.set_fallback_language_types(
|v| vec![format!("syndicate::schemas::Language<{}>", v)].into_iter().collect()));
c.add_external_module(
ExternalModule::new(vec!["gatekeeper".to_owned()], "syndicate::schemas::gatekeeper")
.set_fallback_language_types(
|v| vec![format!("syndicate::schemas::Language<{}>", v)].into_iter().collect())
);
c.add_external_module(
ExternalModule::new(vec!["noise".to_owned()], "syndicate::schemas::noise")
.set_fallback_language_types(
|v| vec![format!("syndicate::schemas::Language<{}>", v)].into_iter().collect())
);
let inputs = expand_inputs(&vec!["protocols/schema-bundle.bin".to_owned()])?;
c.load_schemas_and_bundles(&inputs, &vec![])?;
c.load_xref_bin("syndicate", syndicate::schemas::_bundle())?;
compile(&c, &mut CodeCollector::files(gen_dir))
}


@ -0,0 +1,55 @@
use std::sync::Arc;
use structopt::StructOpt;
use syndicate::actor::*;
use syndicate::language;
use syndicate::relay;
use syndicate::schemas::dataspace::Observe;
use syndicate::sturdy;
use syndicate::value::NestedValue;
use tokio::net::TcpStream;
use core::time::Duration;
#[derive(Clone, Debug, StructOpt)]
pub struct Config {
#[structopt(short = "d", default_value = "b4b303726566b7b3036f6964b10973796e646963617465b303736967b21069ca300c1dbfa08fba692102dd82311a8484")]
dataspace: String,
}
#[tokio::main]
async fn main() -> ActorResult {
syndicate::convenient_logging()?;
let config = Config::from_args();
let sturdyref = sturdy::SturdyRef::from_hex(&config.dataspace)?;
let (i, o) = TcpStream::connect("127.0.0.1:9001").await?.into_split();
Actor::top(None, |t| {
relay::connect_stream(t, i, o, false, sturdyref, (), |_state, t, ds| {
let consumer = syndicate::entity(0)
.on_message(|message_count, _t, m: AnyValue| {
if m.value().is_boolean() {
tracing::info!("{:?} messages in the last second", message_count);
*message_count = 0;
} else {
*message_count += 1;
}
Ok(())
})
.create_cap(t);
ds.assert(t, language(), &Observe {
pattern: syndicate_macros::pattern!{<Says $ $>},
observer: Arc::clone(&consumer),
});
t.every(Duration::from_secs(1), move |t| {
consumer.message(t, &(), &AnyValue::new(true));
Ok(())
})?;
Ok(None)
})
}).await??;
Ok(())
}


@ -0,0 +1,96 @@
//! I am a low-level hack intended to consume bytes as quickly as
//! possible, so that the consumer isn't the bottleneck in
//! single-producer/single-consumer broker throughput measurement.
use preserves_schema::Codec;
use structopt::StructOpt;
use syndicate::schemas::Language;
use syndicate::schemas::protocol as P;
use syndicate::schemas::dataspace::Observe;
use syndicate::sturdy;
use syndicate::value::BinarySource;
use syndicate::value::BytesBinarySource;
use syndicate::value::IOValue;
use syndicate::value::PackedWriter;
use syndicate::value::Reader;
use std::io::Read;
use std::io::Write;
use std::net::TcpStream;
use std::time::Duration;
use std::time::Instant;
mod dirty;
#[derive(Clone, Debug, StructOpt)]
pub struct Config {
#[structopt(short = "d", default_value = "b4b303726566b7b3036f6964b10973796e646963617465b303736967b21069ca300c1dbfa08fba692102dd82311a8484")]
dataspace: String,
}
fn main() -> Result<(), Box<dyn std::error::Error>> {
let config = Config::from_args();
let mut stream = TcpStream::connect("127.0.0.1:9001")?;
dirty::dirty_resolve(&mut stream, &config.dataspace)?;
let iolang = Language::<IOValue>::default();
{
let turn = P::Turn::<IOValue>(vec![
P::TurnEvent {
oid: P::Oid(1.into()),
event: P::Event::Assert(Box::new(P::Assert {
assertion: P::Assertion(iolang.unparse(&Observe {
pattern: syndicate_macros::pattern!{<Says $ $>},
observer: iolang.unparse(&sturdy::WireRef::Mine {
oid: Box::new(sturdy::Oid(2.into())),
}),
})),
handle: P::Handle(2.into()),
})),
}
]);
stream.write_all(&PackedWriter::encode_iovalue(&iolang.unparse(&turn))?)?;
}
let mut buf = [0; 131072];
let turn_size = {
let n = stream.read(&mut buf)?;
if n == 0 {
return Ok(());
}
let mut src = BytesBinarySource::new(&buf);
src.packed_iovalues().demand_next(false)?;
src.index
};
let mut start = Instant::now();
let interval = Duration::from_secs(1);
let mut deadline = start + interval;
let mut total_bytes = 0;
loop {
let n = stream.read(&mut buf)?;
if n == 0 {
break;
}
total_bytes += n;
let now = Instant::now();
if now >= deadline {
let delta = now - start;
let message_count = total_bytes as f64 / turn_size as f64;
println!("{} messages in the last second ({} Hz)",
message_count,
message_count / delta.as_secs_f64());
start = now;
total_bytes = 0;
deadline = deadline + interval;
}
}
Ok(())
}


@ -0,0 +1,67 @@
//! I am a low-level hack intended to shovel bytes out the gate as
//! quickly as possible, so that the producer isn't the bottleneck in
//! single-producer/single-consumer broker throughput measurement.
use preserves_schema::Codec;
use structopt::StructOpt;
use syndicate::schemas::Language;
use syndicate::schemas::protocol as P;
use syndicate::value::IOValue;
use syndicate::value::PackedWriter;
use syndicate::value::Value;
use std::io::Write;
use std::net::TcpStream;
mod dirty;
#[derive(Clone, Debug, StructOpt)]
pub struct Config {
#[structopt(short = "a", default_value = "1")]
action_count: u32,
#[structopt(short = "b", default_value = "0")]
bytes_padding: usize,
#[structopt(short = "d", default_value = "b4b303726566b7b3036f6964b10973796e646963617465b303736967b21069ca300c1dbfa08fba692102dd82311a8484")]
dataspace: String,
}
#[inline]
fn says(who: IOValue, what: IOValue) -> IOValue {
let mut r = Value::simple_record("Says", 2);
r.fields_vec_mut().push(who);
r.fields_vec_mut().push(what);
r.finish().wrap()
}
fn main() -> Result<(), Box<dyn std::error::Error>> {
let config = Config::from_args();
let mut stream = TcpStream::connect("127.0.0.1:9001")?;
dirty::dirty_resolve(&mut stream, &config.dataspace)?;
let padding: IOValue = Value::ByteString(vec![0; config.bytes_padding]).wrap();
let mut events = Vec::new();
for _ in 0 .. config.action_count {
events.push(P::TurnEvent::<IOValue> {
oid: P::Oid(1.into()),
event: P::Event::Message(Box::new(P::Message {
body: P::Assertion(says(Value::from("producer").wrap(), padding.clone())),
})),
});
}
let turn = P::Turn(events);
let mut buf: Vec<u8> = vec![];
let iolang = Language::<IOValue>::default();
while buf.len() < 16384 {
buf.extend(&PackedWriter::encode_iovalue(&iolang.unparse(&turn))?);
}
loop {
stream.write_all(&buf)?;
}
}
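The producer above amortizes per-write overhead by repeating one pre-encoded turn until the send buffer crosses 16 KiB, then writing that buffer forever. The buffer-filling step on its own, as a std-only sketch (`fill_buffer` is an illustrative helper, not part of the crate):

```rust
// Fill a send buffer by repeating a single pre-encoded message until
// the buffer length reaches the threshold, as the producer above does
// with its 16 KiB target. The final buffer may overshoot by up to one
// message length.
fn fill_buffer(encoded: &[u8], threshold: usize) -> Vec<u8> {
    let mut buf = Vec::new();
    while buf.len() < threshold {
        buf.extend_from_slice(encoded);
    }
    buf
}

fn main() {
    let buf = fill_buffer(&[1, 2, 3], 8);
    assert_eq!(buf.len(), 9); // three copies of a 3-byte message
}
```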

View File

@ -0,0 +1,47 @@
use preserves_schema::Codec;
use syndicate::schemas::Language;
use syndicate::schemas::gatekeeper;
use syndicate::schemas::protocol as P;
use syndicate::sturdy;
use syndicate::value::IOValue;
use syndicate::value::NestedValue;
use syndicate::value::PackedWriter;
use std::io::Read;
use std::io::Write;
use std::net::TcpStream;
pub fn dirty_resolve(stream: &mut TcpStream, dataspace: &str) -> Result<(), Box<dyn std::error::Error>> {
let iolang = Language::<IOValue>::default();
let sturdyref = sturdy::SturdyRef::from_hex(dataspace)?;
let sturdyref = iolang.parse::<gatekeeper::Step<IOValue>>(
&syndicate::language().unparse(&sturdyref)
.copy_via(&mut |_| Err("no!"))?)?;
let resolve_turn = P::Turn(vec![
P::TurnEvent {
oid: P::Oid(0.into()),
event: P::Event::Assert(Box::new(P::Assert {
assertion: P::Assertion(iolang.unparse(&gatekeeper::Resolve::<IOValue> {
step: sturdyref,
observer: iolang.unparse(&sturdy::WireRef::Mine {
oid: Box::new(sturdy::Oid(0.into())),
}),
})),
handle: P::Handle(1.into()),
})),
}
]);
stream.write_all(&PackedWriter::encode_iovalue(&iolang.unparse(&resolve_turn))?)?;
{
let mut buf = [0; 1024];
stream.read(&mut buf)?;
// We just assume we got a positive response here!!
// We further assume that the resolved dataspace was assigned peer-oid 1
}
Ok(())
}

View File

@ -0,0 +1,205 @@
use std::sync::Arc;
use std::sync::Mutex;
use std::time::SystemTime;
use structopt::StructOpt;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::language;
use syndicate::relay;
use syndicate::schemas::dataspace::Observe;
use syndicate::sturdy;
use syndicate::value::NestedValue;
use syndicate::value::Value;
use tokio::net::TcpStream;
use core::time::Duration;
#[derive(Clone, Debug, StructOpt)]
pub struct PingConfig {
#[structopt(short = "t", default_value = "1")]
turn_count: u32,
#[structopt(short = "a", default_value = "1")]
action_count: u32,
#[structopt(short = "l", default_value = "0")]
report_latency_every: usize,
#[structopt(short = "b", default_value = "0")]
bytes_padding: usize,
}
#[derive(Clone, Debug, StructOpt)]
pub enum PingPongMode {
Ping(PingConfig),
Pong,
}
#[derive(Clone, Debug, StructOpt)]
pub struct Config {
#[structopt(subcommand)]
mode: PingPongMode,
#[structopt(short = "d", default_value = "b4b303726566b7b3036f6964b10973796e646963617465b303736967b21069ca300c1dbfa08fba692102dd82311a8484")]
dataspace: String,
}
fn now() -> u64 {
SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).expect("time after epoch").as_nanos() as u64
}
fn simple_record2(label: &str, v1: AnyValue, v2: AnyValue) -> AnyValue {
let mut r = Value::simple_record(label, 2);
r.fields_vec_mut().push(v1);
r.fields_vec_mut().push(v2);
r.finish().wrap()
}
fn report_latencies(rtt_ns_samples: &[u64]) {
let n = rtt_ns_samples.len();
let rtt_0 = rtt_ns_samples[0];
let rtt_50 = rtt_ns_samples[n / 2];
let rtt_90 = rtt_ns_samples[n * 90 / 100];
let rtt_95 = rtt_ns_samples[n * 95 / 100];
let rtt_99 = rtt_ns_samples[n * 99 / 100];
let rtt_99_9 = rtt_ns_samples[n * 999 / 1000];
let rtt_99_99 = rtt_ns_samples[n * 9999 / 10000];
let rtt_max = rtt_ns_samples[n - 1];
println!("rtt: 0% {:05.5}ms, 50% {:05.5}ms, 90% {:05.5}ms, 95% {:05.5}ms, 99% {:05.5}ms, 99.9% {:05.5}ms, 99.99% {:05.5}ms, max {:05.5}ms",
rtt_0 as f64 / 1000000.0,
rtt_50 as f64 / 1000000.0,
rtt_90 as f64 / 1000000.0,
rtt_95 as f64 / 1000000.0,
rtt_99 as f64 / 1000000.0,
rtt_99_9 as f64 / 1000000.0,
rtt_99_99 as f64 / 1000000.0,
rtt_max as f64 / 1000000.0);
println!("msg: 0% {:05.5}ms, 50% {:05.5}ms, 90% {:05.5}ms, 95% {:05.5}ms, 99% {:05.5}ms, 99.9% {:05.5}ms, 99.99% {:05.5}ms, max {:05.5}ms",
rtt_0 as f64 / 2000000.0,
rtt_50 as f64 / 2000000.0,
rtt_90 as f64 / 2000000.0,
rtt_95 as f64 / 2000000.0,
rtt_99 as f64 / 2000000.0,
rtt_99_9 as f64 / 2000000.0,
rtt_99_99 as f64 / 2000000.0,
rtt_max as f64 / 2000000.0);
}
#[tokio::main]
async fn main() -> ActorResult {
syndicate::convenient_logging()?;
let config = Config::from_args();
let sturdyref = sturdy::SturdyRef::from_hex(&config.dataspace)?;
let (i, o) = TcpStream::connect("127.0.0.1:9001").await?.into_split();
Actor::top(None, |t| {
relay::connect_stream(t, i, o, false, sturdyref, (), move |_state, t, ds| {
let (send_label, recv_label, report_latency_every, should_echo, bytes_padding) =
match config.mode {
PingPongMode::Ping(ref c) =>
("Ping", "Pong", c.report_latency_every, false, c.bytes_padding),
PingPongMode::Pong =>
("Pong", "Ping", 0, true, 0),
};
let consumer = {
let ds = Arc::clone(&ds);
let mut turn_counter: u64 = 0;
let mut event_counter: u64 = 0;
let mut rtt_ns_samples: Vec<u64> = vec![0; report_latency_every];
let mut rtt_batch_count: usize = 0;
let current_reply = Arc::new(Mutex::new(None));
Cap::new(&t.create(
syndicate::entity(())
.on_message(move |(), t, m: AnyValue| {
match m.value().as_boolean() {
Some(_) => {
tracing::info!("{:?} turns, {:?} events in the last second",
turn_counter,
event_counter);
turn_counter = 0;
event_counter = 0;
}
None => {
event_counter += 1;
let bindings = m.value().to_sequence()?;
let timestamp = &bindings[0];
let padding = &bindings[1];
if should_echo || (report_latency_every == 0) {
ds.message(t, &(), &simple_record2(&send_label,
timestamp.clone(),
padding.clone()));
} else {
let mut g = current_reply.lock().expect("unpoisoned");
if g.is_none() {
turn_counter += 1;
t.pre_commit(enclose!((current_reply) move |_| {
*current_reply.lock().expect("unpoisoned") = None;
Ok(())
}));
let rtt_ns = now() - timestamp.value().to_u64()?;
rtt_ns_samples[rtt_batch_count] = rtt_ns;
rtt_batch_count += 1;
if rtt_batch_count == report_latency_every {
rtt_ns_samples.sort();
report_latencies(&rtt_ns_samples);
rtt_batch_count = 0;
}
*g = Some(simple_record2(&send_label,
Value::from(now()).wrap(),
padding.clone()));
}
ds.message(t, &(), g.as_ref().expect("some reply"));
}
}
}
Ok(())
})))
};
ds.assert(t, language(), &Observe {
pattern: {
let recv_label = AnyValue::symbol(recv_label);
syndicate_macros::pattern!{<#(recv_label) $ $>}
},
observer: Arc::clone(&consumer),
});
t.every(Duration::from_secs(1), move |t| {
consumer.message(t, &(), &AnyValue::new(true));
Ok(())
})?;
if let PingPongMode::Ping(c) = &config.mode {
let facet = t.facet_ref();
let turn_count = c.turn_count;
let action_count = c.action_count;
let account = Arc::clone(t.account());
t.linked_task(Some(AnyValue::symbol("boot-ping")), async move {
let padding = AnyValue::bytestring(vec![0; bytes_padding]);
for _ in 0..turn_count {
let current_rec = simple_record2(send_label,
Value::from(now()).wrap(),
padding.clone());
facet.activate(&account, None, |t| {
for _ in 0..action_count {
ds.message(t, &(), &current_rec);
}
Ok(())
});
}
Ok(LinkedTaskTermination::KeepFacet)
});
}
Ok(None)
})
}).await??;
Ok(())
}
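`report_latencies` above reads each percentile straight out of an already-sorted sample vector by index (`n * p / 100` and so on). That index arithmetic, factored out as a std-only sketch (`percentile_per_mille` is a hypothetical helper, not part of the crate):

```rust
// Return the sample at a given per-mille rank from an already-sorted
// slice, mirroring the index arithmetic in report_latencies above.
// The index is clamped so that rank 1000 maps onto the maximum sample.
fn percentile_per_mille(sorted: &[u64], per_mille: usize) -> u64 {
    let n = sorted.len();
    let idx = (n * per_mille / 1000).min(n - 1);
    sorted[idx]
}

fn main() {
    let samples: Vec<u64> = (1..=100).collect();
    assert_eq!(percentile_per_mille(&samples, 500), 51);   // index 50
    assert_eq!(percentile_per_mille(&samples, 990), 100);  // index 99
    assert_eq!(percentile_per_mille(&samples, 1000), 100); // clamped
}
```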

View File

@ -0,0 +1,52 @@
use structopt::StructOpt;
use syndicate::actor::*;
use syndicate::preserves::rec;
use syndicate::relay;
use syndicate::sturdy;
use syndicate::value::NestedValue;
use tokio::net::TcpStream;
#[derive(Clone, Debug, StructOpt)]
pub struct Config {
#[structopt(short = "a", default_value = "1")]
action_count: u32,
#[structopt(short = "b", default_value = "0")]
bytes_padding: usize,
#[structopt(short = "d", default_value = "b4b303726566b7b3036f6964b10973796e646963617465b303736967b21069ca300c1dbfa08fba692102dd82311a8484")]
dataspace: String,
}
#[tokio::main]
async fn main() -> ActorResult {
syndicate::convenient_logging()?;
let config = Config::from_args();
let sturdyref = sturdy::SturdyRef::from_hex(&config.dataspace)?;
let (i, o) = TcpStream::connect("127.0.0.1:9001").await?.into_split();
Actor::top(None, |t| {
relay::connect_stream(t, i, o, false, sturdyref, (), move |_state, t, ds| {
let facet = t.facet_ref();
let padding = AnyValue::new(&vec![0u8; config.bytes_padding][..]);
let action_count = config.action_count;
let account = Account::new(None, None);
t.linked_task(Some(AnyValue::symbol("sender")), async move {
loop {
account.ensure_clear_funds().await;
facet.activate(&account, None, |t| {
for _ in 0..action_count {
ds.message(t, &(), &rec![AnyValue::symbol("Says"),
AnyValue::new("producer"),
padding.clone()]);
}
Ok(())
});
}
});
Ok(None)
})
}).await??;
Ok(())
}

View File

@ -0,0 +1,76 @@
use std::sync::Arc;
use structopt::StructOpt;
use syndicate::actor::*;
use syndicate::language;
use syndicate::relay;
use syndicate::schemas::dataspace::Observe;
use syndicate::sturdy;
use syndicate::value::NestedValue;
use tokio::net::TcpStream;
use core::time::Duration;
#[derive(Clone, Debug, StructOpt)]
pub struct Config {
#[structopt(short = "d", default_value = "b4b303726566b7b3036f6964b10973796e646963617465b303736967b21069ca300c1dbfa08fba692102dd82311a8484")]
dataspace: String,
}
#[tokio::main]
async fn main() -> ActorResult {
syndicate::convenient_logging()?;
let config = Config::from_args();
let sturdyref = sturdy::SturdyRef::from_hex(&config.dataspace)?;
let (i, o) = TcpStream::connect("127.0.0.1:9001").await?.into_split();
Actor::top(None, |t| {
relay::connect_stream(t, i, o, false, sturdyref, (), |_state, t, ds| {
let consumer = {
#[derive(Default)]
struct State {
event_counter: u64,
arrival_counter: u64,
departure_counter: u64,
occupancy: u64,
}
syndicate::entity(State::default()).on_asserted(move |s, _, _| {
s.event_counter += 1;
s.arrival_counter += 1;
s.occupancy += 1;
Ok(Some(Box::new(|s, _| {
s.event_counter += 1;
s.departure_counter += 1;
s.occupancy -= 1;
Ok(())
})))
}).on_message(move |s, _, _| {
tracing::info!(
"{:?} events, {:?} arrivals, {:?} departures, {:?} present in the last second",
s.event_counter,
s.arrival_counter,
s.departure_counter,
s.occupancy);
s.event_counter = 0;
s.arrival_counter = 0;
s.departure_counter = 0;
Ok(())
}).create_cap(t)
};
ds.assert(t, language(), &Observe {
pattern: syndicate_macros::pattern!{<Present $>},
observer: Arc::clone(&consumer),
});
t.every(Duration::from_secs(1), move |t| {
consumer.message(t, &(), &AnyValue::new(true));
Ok(())
})?;
Ok(None)
})
}).await??;
Ok(())
}

View File

@ -0,0 +1,48 @@
use structopt::StructOpt;
use syndicate::actor::*;
use syndicate::preserves::rec;
use syndicate::relay;
use syndicate::sturdy;
use syndicate::value::NestedValue;
use tokio::net::TcpStream;
#[derive(Clone, Debug, StructOpt)]
pub struct Config {
#[structopt(short = "d", default_value = "b4b303726566b7b3036f6964b10973796e646963617465b303736967b21069ca300c1dbfa08fba692102dd82311a8484")]
dataspace: String,
}
#[tokio::main]
async fn main() -> ActorResult {
syndicate::convenient_logging()?;
let config = Config::from_args();
let sturdyref = sturdy::SturdyRef::from_hex(&config.dataspace)?;
let (i, o) = TcpStream::connect("127.0.0.1:9001").await?.into_split();
Actor::top(None, |t| {
relay::connect_stream(t, i, o, false, sturdyref, (), move |_state, t, ds| {
let facet = t.facet_ref();
let account = Account::new(None, None);
t.linked_task(Some(AnyValue::symbol("sender")), async move {
let presence = rec![AnyValue::symbol("Present"), AnyValue::new(std::process::id())];
loop {
let mut handle = None;
facet.activate(&account, None, |t| {
handle = ds.assert(t, &(), &presence);
Ok(())
});
account.ensure_clear_funds().await;
facet.activate(&account, None, |t| {
if let Some(h) = handle {
t.retract(h);
}
Ok(())
});
}
});
Ok(None)
})
}).await??;
Ok(())
}

View File

@ -0,0 +1,9 @@
// tracing::info!(r" {} __{}__{}__ {}", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// tracing::info!(r" {} /{}_/ \_{}\ {}", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// tracing::info!(r" {} / \__/ \ {} __ __", BRIGHT_GREEN, NORMAL);
// tracing::info!(r" {}/{}\__/ \__/{}\{} _______ ______ ____/ /__________ / /____", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// tracing::info!(r" {}\{}/ \__/ \{}/{} / ___/ / / / __ \/ __ / / ___/ __ \/ __/ _ \", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// tracing::info!(r" {} \__/ \__/ {} _\_ \/ /_/ / / / / /_/ / / /__/ /_/ / /_/ __/", BRIGHT_GREEN, NORMAL);
// tracing::info!(r" {} \_{}\__/{}_/ {} /____/\__, /_/ /_/\____/_/\___/\__/_/\__/\___/", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
// tracing::info!(r" /____/");

View File

@ -0,0 +1,8 @@
all: schema-bundle.bin
clean:
rm -f schema-bundle.bin
schema-bundle.bin: schemas/*.prs
preserves-schemac schemas > $@.tmp
mv $@.tmp $@

View File

@ -0,0 +1,17 @@
´³bundle·µ³control„´³schema·³version°³ definitions·³
ExitServer´³rec´³lit³exit„´³tupleµ´³named³code´³atom³ SignedInteger„„„„„„³ embeddedType€„„µ³ documentation„´³schema·³version°³ definitions·³Url´³orµµ±present´³dict·³url´³named³url´³atom³String„„„„„µ±invalid´³dict·³url´³named³url³any„„„„µ±absent´³dict·„„„„„³IOList´³orµµ±bytes´³atom³
ByteString„„µ±string´³atom³String„„µ±nested´³seqof´³refµ„³IOList„„„„„³Metadata´³rec´³lit³metadata„´³tupleµ´³named³object³any„´³named³info´³dictof´³atom³Symbol„³any„„„„„³ Description´³orµµ±present´³dict·³ description´³named³ description´³refµ„³IOList„„„„„µ±invalid´³dict·³ description´³named³ description³any„„„„µ±absent´³dict·„„„„„„³ embeddedType€„„µ³externalServices„´³schema·³version°³ definitions·³Process´³orµµ±simple´³refµ„³ CommandLine„„µ±full´³refµ„³ FullProcess„„„„³Service´³refµ„³ DaemonService„³ClearEnv´³orµµ±present´³dict·³clearEnv´³named³clearEnv´³atom³Boolean„„„„„µ±invalid´³dict·³clearEnv´³named³clearEnv³any„„„„µ±absent´³dict·„„„„„³EnvValue´³orµµ±set´³atom³String„„µ±remove´³lit€„„µ±invalid³any„„„³Protocol´³orµµ±none´³lit³none„„µ±binarySyndicate´³lit³application/syndicate„„µ± textSyndicate´³lit³text/syndicate„„„„³
ProcessDir´³orµµ±present´³dict·³dir´³named³dir´³atom³String„„„„„µ±invalid´³dict·³dir´³named³dir³any„„„„µ±absent´³dict·„„„„„³
ProcessEnv´³orµµ±present´³dict·³env´³named³env´³dictof´³refµ„³ EnvVariable„´³refµ„³EnvValue„„„„„„µ±invalid´³dict·³env´³named³env³any„„„„µ±absent´³dict·„„„„„³ CommandLine´³orµµ±shell´³atom³String„„µ±full´³refµ„³FullCommandLine„„„„³ EnvVariable´³orµµ±string´³atom³String„„µ±symbol´³atom³Symbol„„µ±invalid³any„„„³ FullProcess´³andµ´³dict·³argv´³named³argv´³refµ„³ CommandLine„„„„´³named³env´³refµ„³
ProcessEnv„„´³named³dir´³refµ„³
ProcessDir„„´³named³clearEnv´³refµ„³ClearEnv„„„„³ ReadyOnStart´³orµµ±present´³dict·³ readyOnStart´³named³ readyOnStart´³atom³Boolean„„„„„µ±invalid´³dict·³ readyOnStart´³named³ readyOnStart³any„„„„µ±absent´³dict·„„„„„³ RestartField´³orµµ±present´³dict·³restart´³named³restart´³refµ„³ RestartPolicy„„„„„µ±invalid´³dict·³restart´³named³restart³any„„„„µ±absent´³dict·„„„„„³ DaemonProcess´³rec´³lit³daemon„´³tupleµ´³named³id³any„´³named³config´³refµ„³DaemonProcessSpec„„„„„³ DaemonService´³rec´³lit³daemon„´³tupleµ´³named³id³any„„„„³ ProtocolField´³orµµ±present´³dict·³protocol´³named³protocol´³refµ„³Protocol„„„„„µ±invalid´³dict·³protocol´³named³protocol³any„„„„µ±absent´³dict·„„„„„³ RestartPolicy´³orµµ±always´³lit³always„„µ±onError´³lit³on-error„„µ±all´³lit³all„„µ±never´³lit³never„„„„³FullCommandLine´³ tuplePrefixµ´³named³program´³atom³String„„„´³named³args´³seqof´³atom³String„„„„³DaemonProcessSpec´³orµµ±simple´³refµ„³ CommandLine„„µ±oneShot´³rec´³lit³one-shot„´³tupleµ´³named³setup´³refµ„³ CommandLine„„„„„„µ±full´³refµ„³FullDaemonProcess„„„„³FullDaemonProcess´³andµ´³named³process´³refµ„³ FullProcess„„´³named³ readyOnStart´³refµ„³ ReadyOnStart„„´³named³restart´³refµ„³ RestartField„„´³named³protocol´³refµ„³ ProtocolField„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³internalServices„´³schema·³version°³ definitions·³ ConfigEnv´³dictof´³atom³Symbol„³any„³
Gatekeeper´³rec´³lit³
gatekeeper„´³tupleµ´³named³ bindspace´³embedded´³refµ³
gatekeeper„³Bind„„„„„„³
HttpRouter´³rec´³lit³ http-router„´³tupleµ´³named³httpd´³embedded³any„„„„„³ TcpWithHttp´³rec´³lit³relay-listener„´³tupleµ´³named³addr´³refµ³TransportAddress„³Tcp„„´³named³
gatekeeper´³embedded´³refµ³
gatekeeper„³Resolve„„„´³named³httpd´³embedded´³refµ³http„³ HttpContext„„„„„„³ DebtReporter´³rec´³lit³ debt-reporter„´³tupleµ´³named³intervalSeconds´³atom³Double„„„„„³ ConfigWatcher´³rec´³lit³config-watcher„´³tupleµ´³named³path´³atom³String„„´³named³env´³refµ„³ ConfigEnv„„„„„³TcpWithoutHttp´³rec´³lit³relay-listener„´³tupleµ´³named³addr´³refµ³TransportAddress„³Tcp„„´³named³
gatekeeper´³embedded´³refµ³
gatekeeper„³Resolve„„„„„„³TcpRelayListener´³orµµ±TcpWithoutHttp´³refµ„³TcpWithoutHttp„„µ± TcpWithHttp´³refµ„³ TcpWithHttp„„„„³UnixRelayListener´³rec´³lit³relay-listener„´³tupleµ´³named³addr´³refµ³TransportAddress„³Unix„„´³named³
gatekeeper´³embedded´³refµ³
gatekeeper„³Resolve„„„„„„³HttpStaticFileServer´³rec´³lit³http-static-files„´³tupleµ´³named³dir´³atom³String„„´³named³pathPrefixElements´³atom³ SignedInteger„„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„„„

View File

@ -0,0 +1,12 @@
version 1 .
# Messages and assertions relating to the `$control` entity enabled in syndicate-server when
# the `--control` flag is supplied.
#
# For example, placing the following into `control-config.pr` and starting the server with
# `syndicate-server --control -c control-config.pr` will result in the server exiting with
# exit code 2:
#
# $control ! <exit 2>
ExitServer = <exit @code int> .

View File

@ -0,0 +1,11 @@
version 1 .
# Assertion. Describes `object`.
Metadata = <metadata @object any @info { symbol: any ...:... }> .
# Projections of the `info` in a `Metadata` record.
Description = @present { description: IOList } / @invalid { description: any } / @absent {} .
Url = @present { url: string } / @invalid { url: any } / @absent {} .
# Data type. From preserves' `conventions.md`.
IOList = @bytes bytes / @string string / @nested [IOList ...] .
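The `IOList` definition above is a recursive sum type: raw bytes, a string, or a sequence of further `IOList`s. A minimal Rust mirror with a flattening helper — illustrative only, not the generated schema bindings:

```rust
// Illustrative Rust mirror of the IOList schema type above; the real
// bindings are generated from the schema by preserves-schema.
enum IOList {
    Bytes(Vec<u8>),
    Str(String),
    Nested(Vec<IOList>),
}

// Flatten an IOList into one byte vector, UTF-8-encoding the strings.
fn flatten(l: &IOList, out: &mut Vec<u8>) {
    match l {
        IOList::Bytes(bs) => out.extend_from_slice(bs),
        IOList::Str(s) => out.extend_from_slice(s.as_bytes()),
        IOList::Nested(items) => items.iter().for_each(|i| flatten(i, out)),
    }
}

fn main() {
    let l = IOList::Nested(vec![
        IOList::Str("ab".into()),
        IOList::Bytes(vec![b'c']),
    ]);
    let mut out = Vec::new();
    flatten(&l, &mut out);
    assert_eq!(out, b"abc");
}
```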

View File

@ -0,0 +1,55 @@
version 1 .
embeddedType EntityRef.Cap .
Service = DaemonService .
DaemonService = <daemon @id any> .
DaemonProcess = <daemon @id any @config DaemonProcessSpec>.
DaemonProcessSpec = @simple CommandLine / @oneShot <one-shot @setup CommandLine> / @full FullDaemonProcess .
FullDaemonProcess = @process FullProcess & @readyOnStart ReadyOnStart & @restart RestartField & @protocol ProtocolField .
ReadyOnStart = @present { readyOnStart: bool } / @invalid { readyOnStart: any } / @absent {} .
RestartField = @present { restart: RestartPolicy } / @invalid { restart: any } / @absent {} .
ProtocolField = @present { protocol: Protocol } / @invalid { protocol: any } / @absent {} .
Process = @simple CommandLine / @full FullProcess .
FullProcess =
& { argv: CommandLine }
& @env ProcessEnv
& @dir ProcessDir
& @clearEnv ClearEnv
.
ProcessEnv = @present { env: { EnvVariable: EnvValue ...:... } } / @invalid { env: any } / @absent {} .
ProcessDir = @present { dir: string } / @invalid { dir: any } / @absent {} .
ClearEnv = @present { clearEnv: bool } / @invalid { clearEnv: any } / @absent {} .
CommandLine = @shell string / @full FullCommandLine .
FullCommandLine = [@program string, @args string ...] .
EnvVariable = @string string / @symbol symbol / @invalid any .
EnvValue = @set string / @remove #f / @invalid any .
RestartPolicy =
/ # Whether the process terminates normally or abnormally, restart it
# without affecting any peer processes within the service.
=always
/ # If the process terminates normally, leave everything alone; if it
# terminates abnormally, restart it without affecting peers.
@onError =on-error
/ # If the process terminates normally, leave everything alone; if it
# terminates abnormally, restart the whole daemon (all processes
# within the daemon).
=all
/ # Treat both normal and abnormal termination as normal termination; that is, never restart,
# and enter state "complete" even if the process fails.
=never
.
Protocol =
/ # stdin is /dev/null, output and error are logged
=none
/ # stdin and stdout are *binary* Syndicate-protocol channels
@binarySyndicate =application/syndicate
/ # stdin and stdout are *text* Syndicate-protocol channels
@textSyndicate =text/syndicate
.
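The `RestartPolicy` comments above describe four supervisor behaviours in terms of normal versus abnormal termination. The decision table they encode, as a plain Rust sketch (illustrative, not the generated schema binding; the `Action` names are invented here):

```rust
// Illustrative mirror of the RestartPolicy variants above, mapping a
// process exit to the supervisor action each policy's comment implies.
#[derive(Debug, PartialEq)]
enum RestartPolicy { Always, OnError, All, Never }

#[derive(Debug, PartialEq)]
enum Action { RestartProcess, RestartDaemon, LeaveAlone, MarkComplete }

fn on_exit(policy: &RestartPolicy, abnormal: bool) -> Action {
    match (policy, abnormal) {
        (RestartPolicy::Always, _) => Action::RestartProcess,
        (RestartPolicy::OnError, true) => Action::RestartProcess,
        (RestartPolicy::OnError, false) => Action::LeaveAlone,
        (RestartPolicy::All, true) => Action::RestartDaemon,
        (RestartPolicy::All, false) => Action::LeaveAlone,
        (RestartPolicy::Never, _) => Action::MarkComplete,
    }
}

fn main() {
    assert_eq!(on_exit(&RestartPolicy::OnError, true), Action::RestartProcess);
    assert_eq!(on_exit(&RestartPolicy::All, true), Action::RestartDaemon);
    assert_eq!(on_exit(&RestartPolicy::Never, true), Action::MarkComplete);
}
```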

View File

@ -0,0 +1,18 @@
version 1 .
embeddedType EntityRef.Cap .
Gatekeeper = <gatekeeper @bindspace #:gatekeeper.Bind> .
DebtReporter = <debt-reporter @intervalSeconds double>.
TcpRelayListener = TcpWithoutHttp / TcpWithHttp .
TcpWithoutHttp = <relay-listener @addr TransportAddress.Tcp @gatekeeper #:gatekeeper.Resolve> .
TcpWithHttp = <relay-listener @addr TransportAddress.Tcp @gatekeeper #:gatekeeper.Resolve @httpd #:http.HttpContext> .
UnixRelayListener = <relay-listener @addr TransportAddress.Unix @gatekeeper #:gatekeeper.Resolve> .
ConfigWatcher = <config-watcher @path string @env ConfigEnv>.
ConfigEnv = { symbol: any ...:... }.
HttpRouter = <http-router @httpd #:any> .
HttpStaticFileServer = <http-static-files @dir string @pathPrefixElements int> .

View File

@ -0,0 +1,27 @@
use std::sync::Arc;
use syndicate::actor::*;
pub fn adjust(t: &mut Activation, f: &Arc<Field<isize>>, delta: isize) {
let f = f.clone();
tracing::trace!(?f, v0 = ?t.get(&f), "adjust");
*t.get_mut(&f) += delta;
tracing::trace!(?f, v1 = ?t.get(&f), "adjust");
t.on_stop(move |t| {
tracing::trace!(?f, v0 = ?t.get(&f), "cleanup");
*t.get_mut(&f) -= delta;
tracing::trace!(?f, v1 = ?t.get(&f), "cleanup");
Ok(())
});
}
pub fn sync_and_adjust<M: 'static + Send>(t: &mut Activation, r: &Arc<Ref<M>>, f: &Arc<Field<isize>>, delta: isize) {
let f = f.clone();
let sync_handler = t.create(move |t: &mut Activation| {
tracing::trace!(?f, v0 = ?t.get(&f), "sync");
*t.get_mut(&f) += delta;
tracing::trace!(?f, v1 = ?t.get(&f), "sync");
Ok(())
});
t.sync(r, sync_handler)
}

View File

@ -0,0 +1,76 @@
use preserves_schema::Codec;
use std::sync::Arc;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::preserves::rec;
use syndicate::schemas::service;
use syndicate::value::NestedValue;
use crate::counter;
use crate::language::language;
use syndicate_macros::during;
pub fn boot(t: &mut Activation, ds: Arc<Cap>) {
t.spawn(Some(AnyValue::symbol("dependencies_listener")), move |t| {
Ok(during!(t, ds, language(), <require-service $spec>, |t: &mut Activation| {
tracing::debug!(?spec, "tracking dependencies");
t.spawn_link(Some(rec![AnyValue::symbol("dependencies"), language().unparse(&spec)]),
enclose!((ds) |t| run(t, ds, spec)));
Ok(())
}))
});
}
fn run(t: &mut Activation, ds: Arc<Cap>, service_name: AnyValue) -> ActorResult {
let obstacle_count = t.named_field("obstacle_count", 1isize);
t.dataflow(enclose!((service_name, obstacle_count) move |t| {
tracing::trace!(?service_name, obstacle_count = ?t.get(&obstacle_count));
Ok(())
}))?;
t.dataflow({
let mut handle = None;
enclose!((ds, obstacle_count, service_name) move |t| {
let obstacle_count = *t.get(&obstacle_count);
if obstacle_count == 0 {
ds.update(t, &mut handle, language(), Some(&service::RunService {
service_name: service_name.clone(),
}));
} else {
ds.update::<_, service::RunService>(t, &mut handle, language(), None);
}
Ok(())
})
})?;
let depender = service_name.clone();
enclose!((ds, obstacle_count) during!(
t, ds, language(), <depends-on #(&depender) $dependee>,
enclose!((service_name, ds, obstacle_count) move |t: &mut Activation| {
if let Ok(dependee) = language().parse::<service::ServiceState>(&dependee) {
tracing::trace!(?service_name, ?dependee, "new dependency");
ds.assert(t, language(), &service::RequireService {
service_name: dependee.service_name,
});
} else {
tracing::warn!(?service_name, ?dependee, "cannot deduce dependee service name");
}
counter::adjust(t, &obstacle_count, 1);
let d = &dependee.clone();
during!(t, ds, language(), #d, enclose!(
(service_name, obstacle_count, dependee) move |t: &mut Activation| {
tracing::trace!(?service_name, ?dependee, "dependency satisfied");
counter::adjust(t, &obstacle_count, -1);
Ok(())
}));
Ok(())
})));
counter::sync_and_adjust(t, &ds.underlying, &obstacle_count, -1);
Ok(())
}
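`run` above keeps a `RunService` assertion up exactly while `obstacle_count` is zero: the count starts at 1, each new dependency adds 1, each satisfied dependency subtracts 1, and the initial dataspace sync subtracts the final 1. The same toggle logic in plain Rust, with no dataspace involved (`ServiceGate` is an invented illustrative type):

```rust
// Illustrative toggle mirroring the dataflow block above: the service
// counts as running exactly when no obstacles remain.
struct ServiceGate {
    obstacles: isize,
    running: bool,
}

impl ServiceGate {
    // Starts with one obstacle, cleared by the initial sync.
    fn new() -> Self {
        ServiceGate { obstacles: 1, running: false }
    }
    fn adjust(&mut self, delta: isize) {
        self.obstacles += delta;
        self.running = self.obstacles == 0;
    }
}

fn main() {
    let mut g = ServiceGate::new();
    g.adjust(1);  // a dependency appeared
    g.adjust(-1); // that dependency was satisfied
    assert!(!g.running); // initial sync still outstanding
    g.adjust(-1); // initial sync completed
    assert!(g.running);
}
```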

View File

@ -0,0 +1,500 @@
use noise_protocol::CipherState;
use noise_protocol::U8Array;
use noise_protocol::patterns::HandshakePattern;
use noise_rust_crypto::Blake2s;
use noise_rust_crypto::ChaCha20Poly1305;
use noise_rust_crypto::X25519;
use preserves_schema::Codec;
use syndicate::relay::Mutex;
use syndicate::relay::TunnelRelay;
use syndicate::trace::TurnCause;
use syndicate::value::NoEmbeddedDomainCodec;
use syndicate::value::packed::PackedWriter;
use std::convert::TryInto;
use std::sync::Arc;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::value::NestedValue;
use syndicate::schemas::dataspace;
use syndicate::schemas::gatekeeper;
use syndicate::schemas::noise;
use syndicate::schemas::sturdy;
use crate::language::language;
use syndicate_macros::during;
use syndicate_macros::pattern;
fn sturdy_step_type() -> String {
language().unparse(&sturdy::SturdyStepType).value().to_symbol().unwrap().clone()
}
fn noise_step_type() -> String {
language().unparse(&noise::NoiseStepType).value().to_symbol().unwrap().clone()
}
pub fn handle_binds(t: &mut Activation, ds: &Arc<Cap>) -> ActorResult {
during!(t, ds, language(), <bind <ref $desc> $target $observer>, |t: &mut Activation| {
t.spawn_link(None, move |t| {
target.value().to_embedded()?;
let observer = language().parse::<gatekeeper::BindObserver>(&observer)?;
let desc = language().parse::<sturdy::SturdyDescriptionDetail>(&desc)?;
let sr = sturdy::SturdyRef::mint(desc.oid, &desc.key);
if let gatekeeper::BindObserver::Present(o) = observer {
o.assert(t, language(), &gatekeeper::Bound::Bound {
path_step: Box::new(gatekeeper::PathStep {
step_type: sturdy_step_type(),
detail: language().unparse(&sr.parameters),
}),
});
}
Ok(())
});
Ok(())
});
during!(t, ds, language(), <bind <noise $desc> $target $observer>, |t: &mut Activation| {
t.spawn_link(None, move |t| {
target.value().to_embedded()?;
let observer = language().parse::<gatekeeper::BindObserver>(&observer)?;
let spec = language().parse::<noise::NoiseDescriptionDetail<AnyValue>>(&desc)?.0;
match validate_noise_spec(spec) {
Ok(spec) => if let gatekeeper::BindObserver::Present(o) = observer {
o.assert(t, language(), &gatekeeper::Bound::Bound {
path_step: Box::new(gatekeeper::PathStep {
step_type: noise_step_type(),
detail: language().unparse(&noise::NoisePathStepDetail(noise::NoiseSpec {
key: spec.public_key,
service: noise::ServiceSelector(spec.service),
protocol: if spec.protocol == default_noise_protocol() {
noise::NoiseProtocol::Absent
} else {
noise::NoiseProtocol::Present {
protocol: spec.protocol,
}
},
pre_shared_keys: if spec.psks.is_empty() {
noise::NoisePreSharedKeys::Absent
} else {
noise::NoisePreSharedKeys::Present {
pre_shared_keys: spec.psks,
}
},
})),
}),
});
},
Err(e) => {
if let gatekeeper::BindObserver::Present(o) = observer {
o.assert(t, language(), &gatekeeper::Bound::Rejected(
Box::new(gatekeeper::Rejected {
detail: AnyValue::new(format!("{}", &e)),
})));
}
tracing::error!("Invalid noise bind description: {}", e);
}
}
Ok(())
});
Ok(())
});
Ok(())
}
pub fn facet_handle_resolve(
ds: &mut Arc<Cap>,
t: &mut Activation,
a: gatekeeper::Resolve,
) -> ActorResult {
let mut detail: &'static str = "unsupported";
if a.step.step_type == sturdy_step_type() {
detail = "invalid";
if let Ok(s) = language().parse::<sturdy::SturdyStepDetail>(&a.step.detail) {
t.facet(|t| {
let f = handle_direct_resolution(ds, t, a.clone())?;
await_bind_sturdyref(ds, t, sturdy::SturdyRef { parameters: s.0 }, a.observer, f)
})?;
return Ok(());
}
}
if a.step.step_type == noise_step_type() {
detail = "invalid";
if let Ok(s) = language().parse::<noise::NoiseStepDetail<AnyValue>>(&a.step.detail) {
t.facet(|t| {
let f = handle_direct_resolution(ds, t, a.clone())?;
await_bind_noise(ds, t, s.0.0, a.observer, f)
})?;
return Ok(());
}
}
a.observer.assert(t, language(), &gatekeeper::Rejected {
detail: AnyValue::symbol(detail),
});
Ok(())
}
fn handle_direct_resolution(
ds: &mut Arc<Cap>,
t: &mut Activation,
a: gatekeeper::Resolve,
) -> Result<FacetId, ActorError> {
let outer_facet = t.facet_id();
t.facet(move |t| {
let handler = syndicate::entity(a.observer)
.on_asserted(move |observer, t, a: AnyValue| {
t.stop_facet_and_continue(outer_facet, Some(
enclose!((observer, a) move |t: &mut Activation| {
observer.assert(t, language(), &a);
Ok(())
})))?;
Ok(None)
})
.create_cap(t);
ds.assert(t, language(), &gatekeeper::Resolve {
step: a.step.clone(),
observer: handler,
});
Ok(())
})
}
fn await_bind_sturdyref(
ds: &mut Arc<Cap>,
t: &mut Activation,
sturdyref: sturdy::SturdyRef,
observer: Arc<Cap>,
direct_resolution_facet: FacetId,
) -> ActorResult {
let queried_oid = sturdyref.parameters.oid.clone();
let handler = syndicate::entity(observer)
.on_asserted(move |observer, t, a: AnyValue| {
t.stop_facet(direct_resolution_facet);
let bindings = a.value().to_sequence()?;
let key = bindings[0].value().to_bytestring()?;
let unattenuated_target = bindings[1].value().to_embedded()?;
match sturdyref.validate_and_attenuate(key, unattenuated_target) {
Err(e) => {
tracing::warn!(sturdyref = ?language().unparse(&sturdyref),
"sturdyref failed validation: {}", e);
observer.assert(t, language(), &gatekeeper::Resolved::Rejected(
Box::new(gatekeeper::Rejected {
detail: AnyValue::symbol("sturdyref-failed-validation"),
})));
},
Ok(target) => {
tracing::trace!(sturdyref = ?language().unparse(&sturdyref),
?target,
"sturdyref resolved");
observer.assert(t, language(), &gatekeeper::Resolved::Accepted {
responder_session: target,
});
}
}
Ok(None)
})
.create_cap(t);
ds.assert(t, language(), &dataspace::Observe {
// TODO: codegen plugin to generate pattern constructors
pattern: pattern!{<bind <ref { oid: #(&queried_oid), key: $ }> $ _>},
observer: handler,
});
Ok(())
}
struct ValidatedNoiseSpec {
service: AnyValue,
protocol: String,
pattern: HandshakePattern,
psks: Vec<Vec<u8>>,
secret_key: Option<Vec<u8>>,
public_key: Vec<u8>,
}
fn default_noise_protocol() -> String {
language().unparse(&noise::DefaultProtocol).value().to_string().unwrap().clone()
}
fn validate_noise_spec(
spec: noise::NoiseServiceSpec<AnyValue>,
) -> Result<ValidatedNoiseSpec, ActorError> {
let protocol = match spec.base.protocol {
noise::NoiseProtocol::Present { protocol } => protocol,
noise::NoiseProtocol::Invalid { protocol } =>
Err(format!("Invalid noise protocol {:?}", protocol))?,
noise::NoiseProtocol::Absent => default_noise_protocol(),
};
const PREFIX: &'static str = "Noise_";
const SUFFIX: &'static str = "_25519_ChaChaPoly_BLAKE2s";
if !protocol.starts_with(PREFIX) || !protocol.ends_with(SUFFIX) {
Err(format!("Unsupported protocol {:?}", protocol))?;
}
let pattern_name = &protocol[PREFIX.len()..(protocol.len()-SUFFIX.len())];
let pattern = lookup_pattern(pattern_name).ok_or_else::<ActorError, _>(
|| format!("Unsupported handshake pattern {:?}", pattern_name).into())?;
let psks = match spec.base.pre_shared_keys {
noise::NoisePreSharedKeys::Present { pre_shared_keys } => pre_shared_keys,
noise::NoisePreSharedKeys::Invalid { pre_shared_keys } =>
Err(format!("Invalid pre-shared-keys {:?}", pre_shared_keys))?,
noise::NoisePreSharedKeys::Absent => vec![],
};
let secret_key = match spec.secret_key {
noise::SecretKeyField::Present { secret_key } => Some(secret_key),
noise::SecretKeyField::Invalid { secret_key } =>
Err(format!("Invalid secret key {:?}", secret_key))?,
noise::SecretKeyField::Absent => None,
};
Ok(ValidatedNoiseSpec {
service: spec.base.service.0,
protocol,
pattern,
psks,
secret_key,
public_key: spec.base.key,
})
}
fn await_bind_noise(
ds: &mut Arc<Cap>,
t: &mut Activation,
service_selector: AnyValue,
observer: Arc<Cap>,
direct_resolution_facet: FacetId,
) -> ActorResult {
let handler = syndicate::entity(())
.on_asserted_facet(move |_state, t, a: AnyValue| {
t.stop_facet(direct_resolution_facet);
let observer = Arc::clone(&observer);
t.spawn_link(None, move |t| {
let bindings = a.value().to_sequence()?;
let spec = validate_noise_spec(language().parse(&bindings[0])?)?;
let service = bindings[1].value().to_embedded()?;
run_noise_responder(t, spec, observer, Arc::clone(service))
});
Ok(())
})
.create_cap(t);
ds.assert(t, language(), &dataspace::Observe {
// TODO: codegen plugin to generate pattern constructors
pattern: pattern!{
<bind <noise $spec:NoiseServiceSpec{ { service: #(&service_selector) } }> $service _>
},
observer: handler,
});
Ok(())
}
type HandshakeState = noise_protocol::HandshakeState<X25519, ChaCha20Poly1305, Blake2s>;
enum ResponderState {
Invalid, // used during state transitions
Introduction {
service: Arc<Cap>,
hs: HandshakeState,
},
Handshake {
initiator_session: Arc<Cap>,
service: Arc<Cap>,
hs: HandshakeState,
},
Transport {
relay_input: Arc<Mutex<Option<TunnelRelay>>>,
c_recv: CipherState<ChaCha20Poly1305>,
},
}
impl Entity<noise::SessionItem> for ResponderState {
fn assert(&mut self, _t: &mut Activation, item: noise::SessionItem, _handle: Handle) -> ActorResult {
let initiator_session = match item {
noise::SessionItem::Initiator(i_box) => i_box.initiator_session,
noise::SessionItem::Packet(_) => Err("Unexpected Packet assertion")?,
};
match std::mem::replace(self, ResponderState::Invalid) {
ResponderState::Introduction { service, hs } => {
*self = ResponderState::Handshake { initiator_session, service, hs };
Ok(())
}
_ =>
Err("Received second Initiator")?,
}
}
fn message(&mut self, t: &mut Activation, item: noise::SessionItem) -> ActorResult {
let p = match item {
noise::SessionItem::Initiator(_) => Err("Unexpected Initiator message")?,
noise::SessionItem::Packet(p_box) => *p_box,
};
match self {
ResponderState::Invalid | ResponderState::Introduction { .. } =>
Err("Received Packet in invalid ResponderState")?,
ResponderState::Handshake { initiator_session, service, hs } => match p {
noise::Packet::Complete(bs) => {
if bs.len() < hs.get_next_message_overhead() {
Err("Invalid handshake message for pattern")?;
}
if bs.len() > hs.get_next_message_overhead() {
Err("Cannot accept payload during handshake")?;
}
hs.read_message(&bs, &mut [])?;
let mut reply = vec![0u8; hs.get_next_message_overhead()];
hs.write_message(&[], &mut reply[..])?;
initiator_session.message(t, language(), &noise::Packet::Complete(reply.into()));
if hs.completed() {
let (c_recv, mut c_send) = hs.get_ciphers();
let (_, relay_input, mut relay_output) =
TunnelRelay::_run(t, Some(Arc::clone(service)), None, false);
let trace_collector = t.trace_collector();
let initiator_session = Arc::clone(initiator_session);
let relay_output_name = Some(AnyValue::symbol("relay_output"));
let transport_facet = t.facet_ref();
t.linked_task(relay_output_name.clone(), async move {
let account = Account::new(relay_output_name, trace_collector);
let cause = TurnCause::external("relay_output");
loop {
match relay_output.recv().await {
None => return Ok(LinkedTaskTermination::KeepFacet),
Some(loaned_item) => {
const MAXSIZE: usize = 65535 - 16; /* Noise tag length is 16 */
let p = if loaned_item.item.len() > MAXSIZE {
noise::Packet::Fragmented(
loaned_item.item
.chunks(MAXSIZE)
.map(|c| c_send.encrypt_vec(c))
.collect())
} else {
noise::Packet::Complete(c_send.encrypt_vec(&loaned_item.item))
};
if !transport_facet.activate(&account, Some(cause.clone()), |t| {
initiator_session.message(t, language(), &p);
Ok(())
}) {
break;
}
}
}
}
Ok(LinkedTaskTermination::Normal)
});
*self = ResponderState::Transport { relay_input, c_recv };
}
}
_ => Err("Fragmented handshake is not allowed")?,
},
ResponderState::Transport { relay_input, c_recv } => {
let bs = match p {
noise::Packet::Complete(bs) =>
c_recv.decrypt_vec(&bs[..]).map_err(|_| "Cannot decrypt packet")?,
noise::Packet::Fragmented(pieces) => {
let mut result = Vec::with_capacity(1024);
for piece in pieces {
result.extend(c_recv.decrypt_vec(&piece[..])
.map_err(|_| "Cannot decrypt packet fragment")?);
}
result
}
};
let mut g = relay_input.lock();
let tr = g.as_mut().expect("initialized");
tr.handle_inbound_datagram(t, &bs[..])?;
}
}
Ok(())
}
}
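The `relay_output` task above fragments any payload longer than `MAXSIZE` because a Noise transport message is capped at 65535 bytes including the 16-byte AEAD tag, so at most 65535 - 16 bytes of plaintext fit per message. A std-only sketch of just the chunking arithmetic, with encryption stubbed out:

```rust
// Sketch of the fragmentation rule from the relay_output task:
// plaintext chunks are capped at 65535 - 16 bytes, leaving room
// for the 16-byte AEAD tag within the 65535-byte message limit.
// Encryption is omitted; only the chunk sizes are shown.
const MAXSIZE: usize = 65535 - 16;

fn fragment(payload: &[u8]) -> Vec<Vec<u8>> {
    payload.chunks(MAXSIZE).map(|c| c.to_vec()).collect()
}

fn main() {
    let small = vec![0u8; 100];
    assert_eq!(fragment(&small).len(), 1); // fits in one Complete packet
    let big = vec![0u8; MAXSIZE + 1];
    let frags = fragment(&big);
    assert_eq!(frags.len(), 2); // spills into a Fragmented packet
    assert_eq!(frags[1].len(), 1);
}
```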
fn lookup_pattern(name: &str) -> Option<HandshakePattern> {
use noise_protocol::patterns::*;
Some(match name {
"N" => noise_n(),
"K" => noise_k(),
"X" => noise_x(),
"NN" => noise_nn(),
"NK" => noise_nk(),
"NX" => noise_nx(),
"XN" => noise_xn(),
"XK" => noise_xk(),
"XX" => noise_xx(),
"KN" => noise_kn(),
"KK" => noise_kk(),
"KX" => noise_kx(),
"IN" => noise_in(),
"IK" => noise_ik(),
"IX" => noise_ix(),
"Npsk0" => noise_n_psk0(),
"Kpsk0" => noise_k_psk0(),
"Xpsk1" => noise_x_psk1(),
"NNpsk0" => noise_nn_psk0(),
"NNpsk2" => noise_nn_psk2(),
"NKpsk0" => noise_nk_psk0(),
"NKpsk2" => noise_nk_psk2(),
"NXpsk2" => noise_nx_psk2(),
"XNpsk3" => noise_xn_psk3(),
"XKpsk3" => noise_xk_psk3(),
"XXpsk3" => noise_xx_psk3(),
"KNpsk0" => noise_kn_psk0(),
"KNpsk2" => noise_kn_psk2(),
"KKpsk0" => noise_kk_psk0(),
"KKpsk2" => noise_kk_psk2(),
"KXpsk2" => noise_kx_psk2(),
"INpsk1" => noise_in_psk1(),
"INpsk2" => noise_in_psk2(),
"IKpsk1" => noise_ik_psk1(),
"IKpsk2" => noise_ik_psk2(),
"IXpsk2" => noise_ix_psk2(),
"NNpsk0+psk2" => noise_nn_psk0_psk2(),
"NXpsk0+psk1+psk2" => noise_nx_psk0_psk1_psk2(),
"XNpsk1+psk3" => noise_xn_psk1_psk3(),
"XKpsk0+psk3" => noise_xk_psk0_psk3(),
"KNpsk1+psk2" => noise_kn_psk1_psk2(),
"KKpsk0+psk2" => noise_kk_psk0_psk2(),
"INpsk1+psk2" => noise_in_psk1_psk2(),
"IKpsk0+psk2" => noise_ik_psk0_psk2(),
"IXpsk0+psk2" => noise_ix_psk0_psk2(),
"XXpsk0+psk1" => noise_xx_psk0_psk1(),
"XXpsk0+psk2" => noise_xx_psk0_psk2(),
"XXpsk0+psk3" => noise_xx_psk0_psk3(),
"XXpsk0+psk1+psk2+psk3" => noise_xx_psk0_psk1_psk2_psk3(),
_ => return None,
})
}
fn run_noise_responder(
t: &mut Activation,
spec: ValidatedNoiseSpec,
observer: Arc<Cap>,
service: Arc<Cap>,
) -> ActorResult {
let hs = {
let mut builder = noise_protocol::HandshakeStateBuilder::new();
builder.set_pattern(spec.pattern);
builder.set_is_initiator(false);
let prologue = PackedWriter::encode(&mut NoEmbeddedDomainCodec, &spec.service)?;
builder.set_prologue(&prologue);
match spec.secret_key {
None => (),
Some(sk) => {
let sk: [u8; 32] = sk.try_into().map_err(|_| "Bad secret key length")?;
builder.set_s(U8Array::from_slice(&sk));
},
}
let mut hs = builder.build_handshake_state();
for psk in spec.psks.into_iter() {
hs.push_psk(&psk);
}
hs
};
let responder_session =
Cap::guard(crate::Language::arc(), t.create(ResponderState::Introduction{ service, hs }));
observer.assert(t, language(), &gatekeeper::Resolved::Accepted { responder_session });
Ok(())
}


@ -0,0 +1,195 @@
use std::convert::TryInto;
use std::sync::Arc;
use std::sync::atomic::AtomicU64;
use std::sync::atomic::Ordering;
use hyper::{Request, Response, Body, StatusCode};
use hyper::body;
use hyper::header::HeaderName;
use hyper::header::HeaderValue;
use syndicate::actor::*;
use syndicate::error::Error;
use syndicate::trace;
use syndicate::value::Map;
use syndicate::value::NestedValue;
use syndicate::schemas::http;
use tokio::sync::oneshot;
use tokio::sync::mpsc::{UnboundedSender, unbounded_channel};
use tokio_stream::wrappers::UnboundedReceiverStream;
use crate::language;
static NEXT_SEQ: AtomicU64 = AtomicU64::new(0);
pub fn empty_response(code: StatusCode) -> Response<Body> {
let mut r = Response::new(Body::empty());
*r.status_mut() = code;
r
}
type ChunkItem = Result<body::Bytes, Box<dyn std::error::Error + Send + Sync>>;
struct ResponseCollector {
tx_res: Option<(oneshot::Sender<Response<Body>>, Response<Body>)>,
body_tx: Option<UnboundedSender<ChunkItem>>,
}
impl ResponseCollector {
fn new(tx: oneshot::Sender<Response<Body>>) -> Self {
let (body_tx, body_rx) = unbounded_channel();
let body_stream: Box<dyn futures::Stream<Item = ChunkItem> + Send> =
Box::new(UnboundedReceiverStream::new(body_rx));
let mut res = Response::new(body_stream.into());
*res.status_mut() = StatusCode::OK;
ResponseCollector {
tx_res: Some((tx, res)),
body_tx: Some(body_tx),
}
}
fn with_res<F: FnOnce(&mut Response<Body>) -> ActorResult>(&mut self, f: F) -> ActorResult {
if let Some((_, res)) = &mut self.tx_res {
f(res)?;
}
Ok(())
}
fn deliver_res(&mut self) {
if let Some((tx, res)) = std::mem::replace(&mut self.tx_res, None) {
let _ = tx.send(res);
}
}
fn add_chunk(&mut self, value: http::Chunk) -> ActorResult {
self.deliver_res();
if let Some(body_tx) = self.body_tx.as_mut() {
body_tx.send(Ok(match value {
http::Chunk::Bytes(bs) => bs.into(),
http::Chunk::String(s) => s.as_bytes().to_vec().into(),
}))?;
}
Ok(())
}
fn finish(&mut self, t: &mut Activation) -> ActorResult {
self.deliver_res();
self.body_tx = None;
t.stop();
Ok(())
}
}
impl Entity<http::HttpResponse> for ResponseCollector {
fn message(&mut self, t: &mut Activation, message: http::HttpResponse) -> ActorResult {
match message {
http::HttpResponse::Status { code, .. } => self.with_res(|r| {
*r.status_mut() = StatusCode::from_u16(
(&code).try_into().map_err(|_| "bad status code")?)?;
Ok(())
}),
http::HttpResponse::Header { name, value } => self.with_res(|r| {
r.headers_mut().insert(HeaderName::from_bytes(name.as_bytes())?,
HeaderValue::from_str(value.as_str())?);
Ok(())
}),
http::HttpResponse::Chunk { chunk } => {
self.add_chunk(*chunk)
}
http::HttpResponse::Done { chunk } => {
self.add_chunk(*chunk)?;
self.finish(t)
}
}
}
}
pub async fn serve(
trace_collector: Option<trace::TraceCollector>,
facet: FacetRef,
httpd: Arc<Cap>,
mut req: Request<Body>,
port: u16,
) -> Result<Response<Body>, Error> {
let host = match req.headers().get("host").and_then(|v| v.to_str().ok()) {
None => http::RequestHost::Absent,
Some(h) => http::RequestHost::Present(match h.rsplit_once(':') {
None => h.to_string(),
Some((h, _port)) => h.to_string(),
})
};
let uri = req.uri();
let mut path: Vec<String> = uri.path().split('/').map(|s| s.to_string()).collect();
path.remove(0);
let mut query: Map<String, Vec<http::QueryValue>> = Map::new();
for piece in uri.query().unwrap_or("").split('&').into_iter() {
match piece.split_once('=') {
Some((k, v)) => {
let key = k.to_string();
let value = v.to_string();
match query.get_mut(&key) {
None => { query.insert(key, vec![http::QueryValue::String(value)]); },
Some(vs) => { vs.push(http::QueryValue::String(value)); },
}
}
None => {
if piece.len() > 0 {
let key = piece.to_string();
if !query.contains_key(&key) {
query.insert(key, vec![]);
}
}
}
}
}
let mut headers: Map<String, String> = Map::new();
for h in req.headers().into_iter() {
match h.1.to_str() {
Ok(v) => { headers.insert(h.0.as_str().to_string().to_lowercase(), v.to_string()); },
Err(_) => return Ok(empty_response(StatusCode::BAD_REQUEST)),
}
}
let body = match body::to_bytes(req.body_mut()).await {
Ok(b) => http::RequestBody::Present(b.to_vec()),
Err(_) => return Ok(empty_response(StatusCode::BAD_REQUEST)),
};
let account = Account::new(Some(AnyValue::symbol("http")), trace_collector);
let (tx, rx) = oneshot::channel();
facet.activate(&account, Some(trace::TurnCause::external("http")), |t| {
t.facet(move |t| {
let sreq = http::HttpRequest {
sequence_number: NEXT_SEQ.fetch_add(1, Ordering::Relaxed).into(),
host,
port: port.into(),
method: req.method().to_string().to_lowercase(),
path,
headers: http::Headers(headers),
query,
body,
};
tracing::debug!(?sreq);
let srep = Cap::guard(&language().syndicate, t.create(ResponseCollector::new(tx)));
httpd.assert(t, language(), &http::HttpContext { req: sreq, res: srep });
Ok(())
})?;
Ok(())
});
let response_result = rx.await;
match response_result {
Ok(response) => Ok(response),
Err(_ /* sender dropped */) => Ok(empty_response(StatusCode::INTERNAL_SERVER_ERROR)),
}
}
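The Host-header handling in `serve` above drops an optional `:port` suffix with `rsplit_once`. A std-only sketch of that normalization (`normalize_host` is a hypothetical standalone helper):

```rust
// Sketch of the Host-header normalization in `serve`:
// an optional ":port" suffix is dropped at the last colon.
fn normalize_host(h: &str) -> String {
    match h.rsplit_once(':') {
        None => h.to_string(),
        Some((host, _port)) => host.to_string(),
    }
}

fn main() {
    assert_eq!(normalize_host("example.org:8080"), "example.org");
    assert_eq!(normalize_host("example.org"), "example.org");
    assert_eq!(normalize_host("[::1]:80"), "[::1]");
}
```

Splitting at the last colon keeps bracketed IPv6 literals with a port intact; a bare IPv6 literal without a port would still be truncated at its final colon, a behavior this sketch shares with the code above.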


@ -0,0 +1,6 @@
use syndicate::actor;
preserves_schema::define_language!(language(): Language<actor::AnyValue> {
syndicate: syndicate::schemas::Language,
server: crate::schemas::Language,
});


@ -0,0 +1,62 @@
use std::sync::Arc;
use syndicate::actor::*;
use syndicate::schemas::service::*;
use syndicate::preserves_schema::support::Unparse;
use crate::language::Language;
use crate::language::language;
use syndicate_macros::on_message;
pub fn updater<'a, N: Clone + Unparse<&'a Language<AnyValue>, AnyValue>>(
ds: Arc<Cap>,
name: N,
) -> impl FnMut(&mut Activation, State) -> ActorResult {
let mut handle = None;
move |t, state| {
ds.update(t, &mut handle, language(), Some(&lifecycle(&name, state)));
Ok(())
}
}
pub fn lifecycle<'a, N: Unparse<&'a Language<AnyValue>, AnyValue>>(
service_name: &N,
state: State,
) -> ServiceState {
ServiceState {
service_name: service_name.unparse(language()),
state,
}
}
pub fn started<'a, N: Unparse<&'a Language<AnyValue>, AnyValue>>(service_name: &N) -> ServiceState {
lifecycle(service_name, State::Started)
}
pub fn ready<'a, N: Unparse<&'a Language<AnyValue>, AnyValue>>(service_name: &N) -> ServiceState {
lifecycle(service_name, State::Ready)
}
pub fn on_service_restart<'a,
N: Unparse<&'a Language<AnyValue>, AnyValue>,
F: 'static + Send + FnMut(&mut Activation) -> ActorResult>(
t: &mut Activation,
ds: &Arc<Cap>,
service_name: &N,
mut f: F,
) {
on_message!(t, ds, language(), <restart-service #(&service_name.unparse(language()))>, f);
}
pub fn terminate_on_service_restart<'a, N: Unparse<&'a Language<AnyValue>, AnyValue>>(
t: &mut Activation,
ds: &Arc<Cap>,
service_name: &N,
) {
on_service_restart(t, ds, service_name, |t| {
tracing::info!("Terminating to restart");
t.stop_root();
Ok(())
});
}


@ -0,0 +1,243 @@
use preserves_schema::Codec;
use std::convert::TryInto;
use std::io;
use std::path::PathBuf;
use std::sync::Arc;
use structopt::StructOpt;
use syndicate::actor::*;
use syndicate::dataspace::*;
use syndicate::enclose;
use syndicate::relay;
use syndicate::schemas::service;
use syndicate::schemas::transport_address;
use syndicate::trace;
use syndicate::value::Map;
use syndicate::value::NestedValue;
mod counter;
mod dependencies;
mod gatekeeper;
mod http;
mod language;
mod lifecycle;
mod protocol;
mod script;
mod services;
#[cfg(feature = "jemalloc")]
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
mod schemas {
include!(concat!(env!("OUT_DIR"), "/src/schemas/mod.rs"));
}
use language::Language;
use language::language;
use schemas::internal_services;
#[derive(Clone, StructOpt)]
struct ServerConfig {
#[structopt(short = "p", long = "port")]
ports: Vec<u16>,
#[structopt(short = "s", long = "socket")]
sockets: Vec<PathBuf>,
#[structopt(long)]
inferior: bool,
#[structopt(long)]
debt_reporter: bool,
#[structopt(short = "c", long)]
config: Vec<PathBuf>,
#[structopt(long)]
no_banner: bool,
#[structopt(short = "t", long)]
trace_file: Option<PathBuf>,
/// Enable `$control` entity.
#[structopt(long)]
control: bool,
}
#[tokio::main]
async fn main() -> ActorResult {
let config = Arc::new(ServerConfig::from_args());
syndicate::convenient_logging()?;
if !config.no_banner && !config.inferior {
const BRIGHT_GREEN: &str = "\x1b[92m";
const RED: &str = "\x1b[31m";
const GREEN: &str = "\x1b[32m";
const NORMAL: &str = "\x1b[0m";
const BRIGHT_YELLOW: &str = "\x1b[93m";
eprintln!(r"{} ______ {}", GREEN, NORMAL);
eprintln!(r"{} / {}\_{}\{} ", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
eprintln!(r"{} / {},{}__/{} \ {} ____ __", GREEN, RED, BRIGHT_GREEN, GREEN, NORMAL);
eprintln!(r"{} /{}\__/ \{},{} \{} _______ ______ ____/ /_/________ / /____", GREEN, BRIGHT_GREEN, RED, GREEN, NORMAL);
eprintln!(r"{} \{}/ \__/ {}/{} / ___/ / / / __ \/ __ / / ___/ __ \/ __/ _ \", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
eprintln!(r"{} \ {}'{} \__{}/ {} _\_ \/ /_/ / / / / /_/ / / /__/ /_/ / /_/ __/", GREEN, RED, BRIGHT_GREEN, GREEN, NORMAL);
eprintln!(r"{} \____{}/{}_/ {} /____/\__, /_/ /_/\____/_/\___/\__/_/\__/\___/", GREEN, BRIGHT_GREEN, GREEN, NORMAL);
eprintln!(r" /____/");
eprintln!(r"");
eprintln!(r" {}version {} [syndicate {}]{}", BRIGHT_YELLOW, env!("CARGO_PKG_VERSION"), syndicate::syndicate_package_version(), NORMAL);
eprintln!(r"");
eprintln!(r" documentation & reference material: https://syndicate-lang.org/");
eprintln!(r" source code & bugs: https://git.syndicate-lang.org/syndicate-lang/syndicate-rs");
eprintln!(r"");
}
tracing::trace!("startup");
let trace_collector = config.trace_file.clone().map(
|p| Ok::<trace::TraceCollector, io::Error>(trace::TraceCollector::ascii(
io::BufWriter::new(std::fs::File::create(p)?))))
.transpose()?;
Actor::top(trace_collector, move |t| {
let server_config_ds = Cap::new(&t.create(Dataspace::new(Some(AnyValue::symbol("config")))));
let log_ds = Cap::new(&t.create(Dataspace::new(Some(AnyValue::symbol("log")))));
if config.inferior {
tracing::info!("inferior server instance");
t.spawn(Some(AnyValue::symbol("parent")), enclose!((server_config_ds) move |t| {
protocol::run_io_relay(t,
relay::Input::Bytes(Box::pin(tokio::io::stdin())),
relay::Output::Bytes(Box::pin(tokio::io::stdout())),
server_config_ds)
}));
}
let gatekeeper = Cap::guard(Language::arc(), t.create(
syndicate::entity(Arc::clone(&server_config_ds))
.on_asserted_facet(gatekeeper::facet_handle_resolve)));
gatekeeper::handle_binds(t, &server_config_ds)?;
let mut env = Map::new();
env.insert("config".to_owned(), AnyValue::domain(Arc::clone(&server_config_ds)));
env.insert("log".to_owned(), AnyValue::domain(Arc::clone(&log_ds)));
env.insert("gatekeeper".to_owned(), AnyValue::domain(Arc::clone(&gatekeeper)));
if config.control {
env.insert("control".to_owned(), AnyValue::domain(Cap::guard(Language::arc(), t.create(
syndicate::entity(())
.on_message(|_, _t, m: crate::schemas::control::ExitServer| {
tracing::info!("$control received exit request with code {}", m.code);
std::process::exit((&m.code).try_into().unwrap_or_else(|_| {
tracing::warn!(
"exit code {} out-of-range of 32-bit signed integer, using 1 instead",
m.code);
1
}))
})))));
}
dependencies::boot(t, Arc::clone(&server_config_ds));
services::config_watcher::on_demand(t, Arc::clone(&server_config_ds));
services::daemon::on_demand(t, Arc::clone(&server_config_ds), Arc::clone(&log_ds));
services::debt_reporter::on_demand(t, Arc::clone(&server_config_ds));
services::gatekeeper::on_demand(t, Arc::clone(&server_config_ds));
services::http_router::on_demand(t, Arc::clone(&server_config_ds));
services::tcp_relay_listener::on_demand(t, Arc::clone(&server_config_ds));
services::unix_relay_listener::on_demand(t, Arc::clone(&server_config_ds));
if config.debt_reporter {
server_config_ds.assert(t, language(), &service::RunService {
service_name: language().unparse(&internal_services::DebtReporter {
interval_seconds: (1.0).into(),
}),
});
}
for port in config.ports.clone() {
server_config_ds.assert(t, language(), &service::RunService {
service_name: language().unparse(&internal_services::TcpWithoutHttp {
addr: transport_address::Tcp {
host: "0.0.0.0".to_owned(),
port: (port as i32).into(),
},
gatekeeper: gatekeeper.clone(),
}),
});
}
for path in config.sockets.clone() {
server_config_ds.assert(t, language(), &service::RunService {
service_name: language().unparse(&internal_services::UnixRelayListener {
addr: transport_address::Unix {
path: path.to_str().expect("representable UnixListener path").to_owned(),
},
gatekeeper: gatekeeper.clone(),
}),
});
}
for path in config.config.clone() {
server_config_ds.assert(t, language(), &service::RunService {
service_name: language().unparse(&internal_services::ConfigWatcher {
path: path.to_str().expect("representable ConfigWatcher path").to_owned(),
env: internal_services::ConfigEnv(env.clone()),
}),
});
}
t.spawn(Some(AnyValue::symbol("logger")), enclose!((log_ds) move |t| {
let n_unknown: AnyValue = AnyValue::symbol("-");
let n_pid: AnyValue = AnyValue::symbol("pid");
let n_line: AnyValue = AnyValue::symbol("line");
let n_service: AnyValue = AnyValue::symbol("service");
let n_stream: AnyValue = AnyValue::symbol("stream");
let e = syndicate::during::entity(())
.on_message(move |(), _t, captures: AnyValue| {
if let Some(captures) = captures.value_owned().into_sequence() {
let mut captures = captures.into_iter();
let timestamp = captures.next()
.and_then(|t| t.value_owned().into_string())
.unwrap_or_else(|| "-".to_owned());
if let Some(mut d) = captures.next()
.and_then(|d| d.value_owned().into_dictionary())
{
let pid = d.remove(&n_pid).unwrap_or_else(|| n_unknown.clone());
let service = d.remove(&n_service).unwrap_or_else(|| n_unknown.clone());
let line = d.remove(&n_line).unwrap_or_else(|| n_unknown.clone());
let stream = d.remove(&n_stream).unwrap_or_else(|| n_unknown.clone());
let message = format!("{} {:?}[{:?}] {:?}: {:?}",
timestamp,
service,
pid,
stream,
line);
if d.is_empty() {
tracing::info!(target: "", "{}", message);
} else {
tracing::info!(target: "", "{} {:?}", message, d);
}
}
}
Ok(())
})
.create_cap(t);
log_ds.assert(t, language(), &syndicate::schemas::dataspace::Observe {
pattern: syndicate_macros::pattern!(<log $ $>),
observer: e,
});
Ok(())
}));
Ok(())
}).await??;
wait_for_all_actors_to_stop(std::time::Duration::from_secs(2)).await;
Ok(())
}


@ -0,0 +1,155 @@
use futures::SinkExt;
use futures::StreamExt;
use hyper::header::HeaderValue;
use hyper::service::service_fn;
use std::future::ready;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::error::Error;
use syndicate::error::error;
use syndicate::relay;
use syndicate::trace;
use syndicate::value::NestedValue;
use tokio::net::TcpStream;
use hyper_tungstenite::tungstenite::Message;
struct ExitListener;
impl Entity<()> for ExitListener {
fn exit_hook(&mut self, _t: &mut Activation, exit_status: &Arc<ExitStatus>) {
tracing::info!(?exit_status, "disconnect");
}
}
pub fn run_io_relay(
t: &mut Activation,
i: relay::Input,
o: relay::Output,
initial_ref: Arc<Cap>,
) -> ActorResult {
let exit_listener = t.create(ExitListener);
t.add_exit_hook(&exit_listener);
relay::TunnelRelay::run(t, i, o, Some(initial_ref), None, false);
Ok(())
}
pub fn run_connection(
trace_collector: Option<trace::TraceCollector>,
facet: FacetRef,
i: relay::Input,
o: relay::Output,
initial_ref: Arc<Cap>,
) {
let cause = trace_collector.as_ref().map(|_| trace::TurnCause::external("start-session"));
let account = Account::new(Some(AnyValue::symbol("start-session")), trace_collector);
facet.activate(&account, cause, |t| run_io_relay(t, i, o, initial_ref));
}
pub async fn detect_protocol(
trace_collector: Option<trace::TraceCollector>,
facet: FacetRef,
stream: TcpStream,
gateway: Arc<Cap>,
httpd: Option<Arc<Cap>>,
addr: std::net::SocketAddr,
server_port: u16,
) -> ActorResult {
let mut buf = [0; 1]; // peek at the first byte to see what kind of connection to expect
match stream.peek(&mut buf).await? {
1 => match buf[0] {
v if v == b'[' /* Turn */ || v == b'<' /* Error and Extension */ || v >= 128 => {
tracing::info!(protocol = %(if v >= 128 { "application/syndicate" } else { "text/syndicate" }), peer = ?addr);
let (i, o) = stream.into_split();
let i = relay::Input::Bytes(Box::pin(i));
let o = relay::Output::Bytes(Box::pin(o /* BufWriter::new(o) */));
run_connection(trace_collector, facet, i, o, gateway);
Ok(())
}
_ => {
let upgraded = Arc::new(AtomicBool::new(false));
let keepalive = facet.actor.keep_alive();
let mut http = hyper::server::conn::Http::new();
http.http1_keep_alive(true);
http.http1_only(true);
let service = service_fn(|mut req| enclose!(
(upgraded, keepalive, trace_collector, facet, gateway, httpd) async move {
if hyper_tungstenite::is_upgrade_request(&req) {
tracing::info!(protocol = %"websocket",
method=%req.method(),
uri=?req.uri(),
host=?req.headers().get("host").unwrap_or(&HeaderValue::from_static("")));
let (response, websocket) = hyper_tungstenite::upgrade(&mut req, None)
.map_err(|e| message_error(e))?;
upgraded.store(true, Ordering::SeqCst);
tokio::spawn(enclose!(() async move {
let (o, i) = websocket.await?.split();
let i = i.filter_map(|r| ready(extract_binary_packets(r).transpose()));
let o = o.sink_map_err(message_error).with(|bs| ready(Ok(Message::Binary(bs))));
let i = relay::Input::Packets(Box::pin(i));
let o = relay::Output::Packets(Box::pin(o));
run_connection(trace_collector, facet, i, o, gateway);
drop(keepalive);
Ok(()) as ActorResult
}));
Ok(response)
} else {
match httpd {
None => Ok(crate::http::empty_response(
hyper::StatusCode::SERVICE_UNAVAILABLE)),
Some(httpd) => {
tracing::info!(protocol = %"http",
method=%req.method(),
uri=?req.uri(),
host=?req.headers().get("host").unwrap_or(&HeaderValue::from_static("")));
crate::http::serve(trace_collector, facet, httpd, req, server_port).await
}
}
}
}));
http.serve_connection(stream, service).with_upgrades().await?;
if upgraded.load(Ordering::SeqCst) {
tracing::debug!("serve_connection completed after upgrade to websocket");
} else {
tracing::debug!("serve_connection completed after regular HTTP session");
facet.activate(&Account::new(None, None), None, |t| Ok(t.stop()));
}
Ok(())
},
}
0 => Err(error("closed before starting", AnyValue::new(false)))?,
_ => unreachable!()
}
}
fn message_error<E: std::fmt::Display>(e: E) -> Error {
error(&e.to_string(), AnyValue::new(false))
}
fn extract_binary_packets(
r: Result<Message, hyper_tungstenite::tungstenite::Error>,
) -> Result<Option<Vec<u8>>, Error> {
match r {
Ok(m) => match m {
Message::Text(_) =>
Err("Text websocket frames are not accepted")?,
Message::Binary(bs) =>
Ok(Some(bs)),
Message::Ping(_) =>
Ok(None), // pings are handled by tungstenite before we see them
Message::Pong(_) =>
Ok(None), // unsolicited pongs are to be ignored
Message::Close(_) =>
Ok(None), // we're about to see the end of the stream, so ignore this
Message::Frame(_) =>
Err("Raw frames are not accepted")?,
},
Err(e) => Err(message_error(e)),
}
}
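`detect_protocol` above multiplexes syndicate and HTTP traffic on one port by peeking at the first byte: `[` opens a text-syndicate Turn, `<` an Error or Extension record, and any byte >= 128 is binary Preserves; everything else falls through to the HTTP server. A std-only sketch of that sniff (the `Sniffed` enum is a hypothetical stand-in):

```rust
// Sketch of the first-byte sniff in `detect_protocol`:
// bytes >= 128 are binary Preserves, '[' and '<' start textual
// syndicate packets, and anything else is treated as HTTP.
#[derive(Debug, PartialEq)]
enum Sniffed {
    BinarySyndicate,
    TextSyndicate,
    Http,
}

fn sniff(first_byte: u8) -> Sniffed {
    match first_byte {
        v if v >= 128 => Sniffed::BinarySyndicate,
        b'[' | b'<' => Sniffed::TextSyndicate,
        _ => Sniffed::Http,
    }
}

fn main() {
    assert_eq!(sniff(b'['), Sniffed::TextSyndicate);
    assert_eq!(sniff(0x80), Sniffed::BinarySyndicate);
    assert_eq!(sniff(b'G'), Sniffed::Http); // e.g. "GET /"
}
```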


@ -0,0 +1,941 @@
use preserves_schema::Codec;
use std::io;
use std::borrow::Cow;
use std::path::PathBuf;
use std::sync::Arc;
use syndicate::actor::*;
use syndicate::dataspace::Dataspace;
use syndicate::during;
use syndicate::enclose;
use syndicate::pattern::{lift_literal, drop_literal, pattern_seq_from_dictionary};
use syndicate::schemas::dataspace;
use syndicate::schemas::dataspace_patterns as P;
use syndicate::schemas::sturdy;
use syndicate::value::Map;
use syndicate::value::NestedValue;
use syndicate::value::NoEmbeddedDomainCodec;
use syndicate::value::Record;
use syndicate::value::Set;
use syndicate::value::TextWriter;
use syndicate::value::Value;
use crate::language::language;
#[derive(Debug)]
struct PatternInstantiator<'env> {
env: &'env Env,
binding_names: Vec<String>,
}
#[derive(Debug, Clone)]
pub struct Env {
pub path: PathBuf,
bindings: Map<String, AnyValue>,
}
#[derive(Debug)]
pub struct Parser<'t> {
tokens: &'t [AnyValue],
errors: Vec<String>,
}
#[derive(Debug)]
pub enum Parsed<T> {
Value(T),
Skip,
Eof,
}
#[derive(Debug, Clone)]
pub enum Instruction {
Assert {
target: String,
template: AnyValue,
},
Message {
target: String,
template: AnyValue,
},
During {
target: String,
pattern_template: AnyValue,
body: Box<Instruction>,
},
OnMessage {
target: String,
pattern_template: AnyValue,
body: Box<Instruction>,
},
OnStop {
body: Box<Instruction>,
},
Sequence {
instructions: Vec<Instruction>,
},
Let {
pattern_template: AnyValue,
expr: Expr,
},
Cond {
value_var: String,
pattern_template: AnyValue,
on_match: Box<Instruction>,
on_nomatch: Box<Instruction>,
},
}
#[derive(Debug, Clone)]
pub enum Expr {
Template {
template: AnyValue,
},
Dataspace,
Timestamp,
Facet,
Stringify {
expr: Box<Expr>,
},
}
#[derive(Debug, Clone)]
enum RewriteTemplate {
Accept {
pattern_template: AnyValue,
},
Rewrite {
pattern_template: AnyValue,
template_template: AnyValue,
},
}
#[derive(Debug, Clone)]
enum CaveatTemplate {
Alts {
alternatives: Vec<RewriteTemplate>,
},
Reject {
pattern_template: AnyValue,
},
}
#[derive(Debug)]
enum Symbolic {
Reference(String),
Binder(String),
Discard,
Literal(String),
Bare(String),
}
struct FacetHandle;
impl<T> Default for Parsed<T> {
fn default() -> Self {
Parsed::Skip
}
}
impl FacetHandle {
fn new() -> Self {
FacetHandle
}
}
impl Entity<AnyValue> for FacetHandle {
fn message(&mut self, t: &mut Activation, body: AnyValue) -> ActorResult {
if let Some("stop") = body.value().as_symbol().map(|s| s.as_str()) {
t.stop();
return Ok(())
}
tracing::warn!(?body, "Unrecognised message sent to FacetHandle");
return Ok(())
}
}
fn analyze(s: &str) -> Symbolic {
if s == "_" {
Symbolic::Discard
} else if s.starts_with("?") {
Symbolic::Binder(s[1..].to_owned())
} else if s.starts_with("$") {
Symbolic::Reference(s[1..].to_owned())
} else if s.starts_with("=") {
Symbolic::Literal(s[1..].to_owned())
} else {
Symbolic::Bare(s.to_owned())
}
}
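The sigil conventions `analyze` implements above are: `_` discards, `?x` binds, `$x` references a binding, `=x` forces a literal symbol, and anything else is bare. A std-only sketch of the same classification (`classify` is a hypothetical stand-in returning plain string tags rather than the `Symbolic` enum):

```rust
// Sketch of the sigil conventions handled by `analyze`:
// "_" discards, "?x" binds, "$x" references, "=x" is a literal
// symbol, anything else is a bare symbol.
fn classify(s: &str) -> (&'static str, &str) {
    if s == "_" {
        ("discard", "")
    } else if let Some(rest) = s.strip_prefix('?') {
        ("binder", rest)
    } else if let Some(rest) = s.strip_prefix('$') {
        ("reference", rest)
    } else if let Some(rest) = s.strip_prefix('=') {
        ("literal", rest)
    } else {
        ("bare", s)
    }
}

fn main() {
    assert_eq!(classify("?req"), ("binder", "req"));
    assert_eq!(classify("$ds"), ("reference", "ds"));
    assert_eq!(classify("_"), ("discard", ""));
    assert_eq!(classify("log"), ("bare", "log"));
}
```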
fn bad_instruction(message: &str) -> io::Error {
io::Error::new(io::ErrorKind::InvalidData, message)
}
fn discard() -> P::Pattern {
P::Pattern::Discard
}
fn dlit(value: AnyValue) -> P::Pattern {
lift_literal(&value)
}
fn tlit(value: AnyValue) -> sturdy::Template {
sturdy::Template::Lit(Box::new(sturdy::Lit { value }))
}
fn parse_rewrite(raw_base_name: &AnyValue, e: &AnyValue) -> io::Result<RewriteTemplate> {
if let Some(fields) = e.value().as_simple_record("accept", Some(1)) {
return Ok(RewriteTemplate::Accept {
pattern_template: fields[0].clone(),
});
}
if let Some(fields) = e.value().as_simple_record("rewrite", Some(2)) {
return Ok(RewriteTemplate::Rewrite {
pattern_template: fields[0].clone(),
template_template: fields[1].clone(),
});
}
Err(bad_instruction(&format!("Bad rewrite in attenuation of {:?}: {:?}", raw_base_name, e)))
}
fn parse_caveat(raw_base_name: &AnyValue, e: &AnyValue) -> io::Result<CaveatTemplate> {
if let Some(fields) = e.value().as_simple_record("or", Some(1)) {
let raw_rewrites = match fields[0].value().as_sequence() {
None => Err(bad_instruction(&format!(
"Alternatives in <or> in attenuation of {:?} must have sequence of rewrites; got {:?}",
raw_base_name,
fields[0])))?,
Some(vs) => vs,
};
let alternatives =
raw_rewrites.iter().map(|r| parse_rewrite(raw_base_name, r)).collect::<Result<Vec<_>, _>>()?;
return Ok(CaveatTemplate::Alts{ alternatives });
}
if let Some(fields) = e.value().as_simple_record("reject", Some(1)) {
return Ok(CaveatTemplate::Reject{ pattern_template: fields[0].clone() });
}
if let Ok(r) = parse_rewrite(raw_base_name, e) {
return Ok(CaveatTemplate::Alts { alternatives: vec![r] });
}
Err(bad_instruction(&format!("Bad caveat in attenuation of {:?}: {:?}", raw_base_name, e)))
}
fn parse_attenuation(r: &Record<AnyValue>) -> io::Result<Option<(String, Vec<CaveatTemplate>)>> {
if r.label() != &AnyValue::symbol("*") {
return Ok(None);
}
if r.fields().len() != 2 {
Err(bad_instruction(&format!(
"Attenuation requires a reference and a sequence of caveats; got {:?}",
r)))?;
}
let raw_base_name = &r.fields()[0];
let base_name = match raw_base_name.value().as_symbol().map(|s| analyze(&s)) {
Some(Symbolic::Reference(s)) => s,
_ => Err(bad_instruction(&format!(
"Attenuation must have variable reference as first argument; got {:?}",
raw_base_name)))?,
};
let raw_caveats = match r.fields()[1].value().as_sequence() {
None => Err(bad_instruction(&format!(
"Attenuation of {:?} must have sequence of caveats; got {:?}",
raw_base_name,
r.fields()[1])))?,
Some(vs) => vs,
};
let caveats = raw_caveats.iter().map(|c| parse_caveat(raw_base_name, c)).collect::<Result<Vec<_>, _>>()?;
Ok(Some((base_name, caveats)))
}
impl<'env> PatternInstantiator<'env> {
fn instantiate_pattern(&mut self, template: &AnyValue) -> io::Result<P::Pattern> {
Ok(match template.value() {
Value::Boolean(_) |
Value::Double(_) |
Value::SignedInteger(_) |
Value::String(_) |
Value::ByteString(_) |
Value::Embedded(_) =>
dlit(template.clone()),
Value::Symbol(s) => match analyze(s) {
Symbolic::Discard => discard(),
Symbolic::Binder(s) => {
self.binding_names.push(s);
P::Pattern::Bind { pattern: Box::new(discard()) }
}
Symbolic::Reference(s) =>
dlit(self.env.lookup(&s, "pattern-template variable")?.clone()),
Symbolic::Literal(s) | Symbolic::Bare(s) =>
dlit(Value::Symbol(s).wrap()),
},
Value::Record(r) => match parse_attenuation(r)? {
Some((base_name, caveats)) =>
dlit(self.env.eval_attenuation(base_name, caveats)?),
None => match self.maybe_binder_with_pattern(r)? {
Some(pat) => pat,
None => {
let label = self.instantiate_pattern(r.label())?;
let entries = r.fields().iter().enumerate()
.map(|(i, p)| Ok((AnyValue::new(i), self.instantiate_pattern(p)?)))
.collect::<io::Result<Map<AnyValue, P::Pattern>>>()?;
P::Pattern::Group {
type_: Box::new(P::GroupType::Rec {
label: drop_literal(&label)
.ok_or(bad_instruction("Record pattern must have literal label"))?,
}),
entries,
}
}
}
},
Value::Sequence(v) =>
P::Pattern::Group {
type_: Box::new(P::GroupType::Arr),
entries: v.iter().enumerate()
.map(|(i, p)| Ok((AnyValue::new(i), self.instantiate_pattern(p)?)))
.collect::<io::Result<Map<AnyValue, P::Pattern>>>()?,
},
Value::Set(_) =>
Err(bad_instruction(&format!("Sets not permitted in patterns: {:?}", template)))?,
Value::Dictionary(v) =>
P::Pattern::Group {
type_: Box::new(P::GroupType::Dict),
entries: v.iter()
.map(|(a, b)| Ok((a.clone(), self.instantiate_pattern(b)?)))
.collect::<io::Result<Map<AnyValue, P::Pattern>>>()?,
},
})
}
fn maybe_binder_with_pattern(&mut self, r: &Record<AnyValue>) -> io::Result<Option<P::Pattern>> {
match r.label().value().as_symbol().map(|s| analyze(&s)) {
Some(Symbolic::Binder(formal)) if r.fields().len() == 1 => {
let pattern = self.instantiate_pattern(&r.fields()[0])?;
self.binding_names.push(formal);
Ok(Some(P::Pattern::Bind { pattern: Box::new(pattern) }))
},
_ => Ok(None),
}
}
}
impl Env {
pub fn new(path: PathBuf, bindings: Map<String, AnyValue>) -> Self {
Env {
path: path.clone(),
bindings,
}
}
pub fn clone_with_path(&self, path: PathBuf) -> Self {
Env {
path,
bindings: self.bindings.clone(),
}
}
fn lookup_target(&self, s: &str) -> io::Result<Arc<Cap>> {
Ok(self.lookup(s, "target variable")?.value().to_embedded()?.clone())
}
fn lookup(&self, s: &str, what: &'static str) -> io::Result<AnyValue> {
if s == "." {
Ok(AnyValue::new(self.bindings.iter().map(|(k, v)| (AnyValue::symbol(k), v.clone()))
.collect::<Map<AnyValue, AnyValue>>()))
} else {
Ok(self.bindings.get(s).ok_or_else(
|| bad_instruction(&format!("Undefined {}: {:?}", what, s)))?.clone())
}
}
fn instantiate_pattern(
&self,
pattern_template: &AnyValue,
) -> io::Result<(Vec<String>, P::Pattern)> {
let mut inst = PatternInstantiator {
env: self,
binding_names: Vec::new(),
};
let pattern = inst.instantiate_pattern(pattern_template)?;
Ok((inst.binding_names, pattern))
}
fn instantiate_value(&self, template: &AnyValue) -> io::Result<AnyValue> {
Ok(match template.value() {
Value::Boolean(_) |
Value::Double(_) |
Value::SignedInteger(_) |
Value::String(_) |
Value::ByteString(_) |
Value::Embedded(_) =>
template.clone(),
Value::Symbol(s) => match analyze(s) {
Symbolic::Binder(_) | Symbolic::Discard =>
Err(bad_instruction(&format!(
"Invalid use of wildcard in template: {:?}", template)))?,
Symbolic::Reference(s) =>
self.lookup(&s, "template variable")?,
Symbolic::Literal(s) | Symbolic::Bare(s) =>
Value::Symbol(s).wrap(),
},
Value::Record(r) => match parse_attenuation(r)? {
Some((base_name, caveats)) =>
self.eval_attenuation(base_name, caveats)?,
None =>
Value::Record(Record(r.fields_vec().iter().map(|a| self.instantiate_value(a))
.collect::<Result<Vec<_>, _>>()?)).wrap(),
},
Value::Sequence(v) =>
Value::Sequence(v.iter().map(|a| self.instantiate_value(a))
.collect::<Result<Vec<_>, _>>()?).wrap(),
Value::Set(v) =>
Value::Set(v.iter().map(|a| self.instantiate_value(a))
.collect::<Result<Set<_>, _>>()?).wrap(),
Value::Dictionary(v) =>
Value::Dictionary(v.iter().map(|(a,b)| Ok((self.instantiate_value(a)?,
self.instantiate_value(b)?)))
.collect::<io::Result<Map<_, _>>>()?).wrap(),
})
}
pub fn safe_eval(&mut self, t: &mut Activation, i: &Instruction) -> bool {
match self.eval(t, i) {
Ok(()) => true,
Err(error) => {
tracing::error!(path = ?self.path, ?error);
t.stop();
false
}
}
}
pub fn extend(&mut self, binding_names: &Vec<String>, captures: Vec<AnyValue>) {
for (k, v) in binding_names.iter().zip(captures) {
self.bindings.insert(k.clone(), v);
}
}
fn eval_attenuation(
&self,
base_name: String,
caveats: Vec<CaveatTemplate>,
) -> io::Result<AnyValue> {
let base_value = self.lookup(&base_name, "attenuation-base variable")?;
match base_value.value().as_embedded() {
None => Err(bad_instruction(&format!(
"Value to be attenuated is {:?} but must be capability",
base_value))),
Some(base_cap) => {
match base_cap.attenuate(&caveats.iter().map(|c| self.instantiate_caveat(c)).collect::<Result<Vec<_>, _>>()?) {
Ok(derived_cap) => Ok(AnyValue::domain(derived_cap)),
Err(caveat_error) =>
Err(bad_instruction(&format!("Attenuation of {:?} failed: {:?}",
base_value,
caveat_error))),
}
}
}
}
fn bind_and_run(
&self,
t: &mut Activation,
binding_names: &Vec<String>,
captures: AnyValue,
body: &Instruction,
) -> ActorResult {
if let Some(captures) = captures.value_owned().into_sequence() {
let mut env = self.clone();
env.extend(binding_names, captures);
env.safe_eval(t, body);
}
Ok(())
}
pub fn eval(&mut self, t: &mut Activation, i: &Instruction) -> io::Result<()> {
match i {
Instruction::Assert { target, template } => {
self.lookup_target(target)?.assert(t, &(), &self.instantiate_value(template)?);
}
Instruction::Message { target, template } => {
self.lookup_target(target)?.message(t, &(), &self.instantiate_value(template)?);
}
Instruction::During { target, pattern_template, body } => {
let (binding_names, pattern) = self.instantiate_pattern(pattern_template)?;
let observer = during::entity(self.clone())
.on_asserted_facet(enclose!((binding_names, body) move |env, t, cs: AnyValue| {
env.bind_and_run(t, &binding_names, cs, &*body) }))
.create_cap(t);
self.lookup_target(target)?.assert(t, language(), &dataspace::Observe {
pattern,
observer,
});
}
Instruction::OnMessage { target, pattern_template, body } => {
let (binding_names, pattern) = self.instantiate_pattern(pattern_template)?;
let observer = during::entity(self.clone())
.on_message(enclose!((binding_names, body) move |env, t, cs: AnyValue| {
t.facet(|t| env.bind_and_run(t, &binding_names, cs, &*body))?;
Ok(())
}))
.create_cap(t);
self.lookup_target(target)?.assert(t, language(), &dataspace::Observe {
pattern,
observer,
});
}
Instruction::OnStop { body } => {
let mut env = self.clone();
t.on_stop(enclose!((body) move |t| Ok(env.eval(t, &*body)?)));
}
Instruction::Sequence { instructions } => {
for i in instructions {
self.eval(t, i)?;
}
}
Instruction::Let { pattern_template, expr } => {
let (binding_names, pattern) = self.instantiate_pattern(pattern_template)?;
let value = self.eval_expr(t, expr)?;
match pattern.match_value(&value) {
None => Err(bad_instruction(
&format!("Could not match pattern {:?} with value {:?}",
pattern_template,
value)))?,
Some(captures) => {
self.extend(&binding_names, captures);
}
}
}
Instruction::Cond { value_var, pattern_template, on_match, on_nomatch } => {
let (binding_names, pattern) = self.instantiate_pattern(pattern_template)?;
let value = self.lookup(value_var, "value in conditional expression")?;
match pattern.match_value(&value) {
None => self.eval(t, on_nomatch)?,
Some(captures) => {
self.extend(&binding_names, captures);
self.eval(t, on_match)?
}
}
}
}
Ok(())
}
pub fn eval_expr(&self, t: &mut Activation, e: &Expr) -> io::Result<AnyValue> {
match e {
Expr::Template { template } => self.instantiate_value(template),
Expr::Dataspace => Ok(AnyValue::domain(Cap::new(&t.create(Dataspace::new(None))))),
Expr::Timestamp => Ok(AnyValue::new(chrono::Utc::now().to_rfc3339())),
Expr::Facet => Ok(AnyValue::domain(Cap::new(&t.create(FacetHandle::new())))),
Expr::Stringify { expr } => {
let v = self.eval_expr(t, expr)?;
let s = TextWriter::encode(&mut NoEmbeddedDomainCodec, &v)?;
Ok(AnyValue::new(s))
}
}
}
fn instantiate_rewrite(
&self,
rw: &RewriteTemplate,
) -> io::Result<sturdy::Rewrite> {
match rw {
RewriteTemplate::Accept { pattern_template } => {
let (_binding_names, pattern) = self.instantiate_pattern(pattern_template)?;
Ok(sturdy::Rewrite {
pattern: embed_pattern(&P::Pattern::Bind { pattern: Box::new(pattern) }),
template: sturdy::Template::TRef(Box::new(sturdy::TRef { binding: 0.into() })),
})
}
RewriteTemplate::Rewrite { pattern_template, template_template } => {
let (binding_names, pattern) = self.instantiate_pattern(pattern_template)?;
Ok(sturdy::Rewrite {
pattern: embed_pattern(&pattern),
template: self.instantiate_template(&binding_names, template_template)?,
})
}
}
}
fn instantiate_caveat(
&self,
c: &CaveatTemplate,
) -> io::Result<sturdy::Caveat> {
match c {
CaveatTemplate::Alts { alternatives } => {
let mut rewrites =
alternatives.iter().map(|r| self.instantiate_rewrite(r)).collect::<Result<Vec<_>, _>>()?;
if rewrites.len() == 1 {
Ok(sturdy::Caveat::Rewrite(Box::new(rewrites.pop().unwrap())))
} else {
Ok(sturdy::Caveat::Alts(Box::new(sturdy::Alts {
alternatives: rewrites,
})))
}
}
CaveatTemplate::Reject { pattern_template } => {
Ok(sturdy::Caveat::Reject(Box::new(
sturdy::Reject {
pattern: embed_pattern(&self.instantiate_pattern(pattern_template)?.1),
})))
}
}
}
fn instantiate_template(
&self,
binding_names: &Vec<String>,
template: &AnyValue,
) -> io::Result<sturdy::Template> {
let find_bound = |s: &str| {
binding_names.iter().enumerate().find(|(_i, n)| *n == s).map(|(i, _n)| i)
};
Ok(match template.value() {
Value::Boolean(_) |
Value::Double(_) |
Value::SignedInteger(_) |
Value::String(_) |
Value::ByteString(_) |
Value::Embedded(_) =>
tlit(template.clone()),
Value::Symbol(s) => match analyze(s) {
Symbolic::Binder(_) | Symbolic::Discard =>
Err(bad_instruction(&format!(
"Invalid use of wildcard in template: {:?}", template)))?,
Symbolic::Reference(s) =>
match find_bound(&s) {
Some(i) =>
sturdy::Template::TRef(Box::new(sturdy::TRef { binding: i.into() })),
None =>
tlit(self.lookup(&s, "attenuation-template variable")?),
},
Symbolic::Literal(s) | Symbolic::Bare(s) =>
tlit(Value::Symbol(s).wrap()),
},
Value::Record(r) => match parse_attenuation(r)? {
Some((base_name, caveats)) =>
match find_bound(&base_name) {
Some(i) =>
sturdy::Template::TAttenuate(Box::new(sturdy::TAttenuate {
template: sturdy::Template::TRef(Box::new(sturdy::TRef {
binding: i.into(),
})),
attenuation: caveats.iter()
.map(|c| self.instantiate_caveat(c))
.collect::<Result<Vec<_>, _>>()?,
})),
None =>
tlit(self.eval_attenuation(base_name, caveats)?),
},
None => {
// TODO: properly consolidate constant templates into literals.
match self.instantiate_template(binding_names, r.label())? {
sturdy::Template::Lit(b) =>
sturdy::Template::TCompound(Box::new(sturdy::TCompound::Rec {
label: b.value,
fields: r.fields().iter()
.map(|t| self.instantiate_template(binding_names, t))
.collect::<io::Result<Vec<sturdy::Template>>>()?,
})),
_ => Err(bad_instruction("Record template must have literal label"))?,
}
}
},
Value::Sequence(v) =>
sturdy::Template::TCompound(Box::new(sturdy::TCompound::Arr {
items: v.iter()
.map(|p| self.instantiate_template(binding_names, p))
.collect::<io::Result<Vec<sturdy::Template>>>()?,
})),
Value::Set(_) =>
Err(bad_instruction(&format!("Sets not permitted in templates: {:?}", template)))?,
Value::Dictionary(v) =>
sturdy::Template::TCompound(Box::new(sturdy::TCompound::Dict {
entries: v.iter()
.map(|(a, b)| Ok((a.clone(), self.instantiate_template(binding_names, b)?)))
.collect::<io::Result<Map<_, sturdy::Template>>>()?,
})),
})
}
}
fn embed_pattern(p: &P::Pattern) -> sturdy::Pattern {
match p {
P::Pattern::Discard => sturdy::Pattern::PDiscard(Box::new(sturdy::PDiscard)),
P::Pattern::Bind { pattern } => sturdy::Pattern::PBind(Box::new(sturdy::PBind {
pattern: embed_pattern(&**pattern),
})),
P::Pattern::Lit { value } => sturdy::Pattern::Lit(Box::new(sturdy::Lit {
value: language().unparse(&**value),
})),
P::Pattern::Group { type_, entries } => sturdy::Pattern::PCompound(Box::new(match &**type_ {
P::GroupType::Rec { label } =>
sturdy::PCompound::Rec {
label: label.clone(),
fields: pattern_seq_from_dictionary(entries).expect("correct field entries")
.into_iter().map(embed_pattern).collect(),
},
P::GroupType::Arr =>
sturdy::PCompound::Arr {
items: pattern_seq_from_dictionary(entries).expect("correct element entries")
.into_iter().map(embed_pattern).collect(),
},
P::GroupType::Dict =>
sturdy::PCompound::Dict {
entries: entries.iter().map(|(k, v)| (k.clone(), embed_pattern(v))).collect(),
},
})),
}
}
impl<'t> Parser<'t> {
pub fn new(tokens: &'t [AnyValue]) -> Self {
Parser {
tokens,
errors: Vec::new(),
}
}
fn peek(&mut self) -> &'t Value<AnyValue> {
self.tokens[0].value()
}
fn shift(&mut self) -> AnyValue {
let v = self.tokens[0].clone();
self.drop();
v
}
fn drop(&mut self) {
self.tokens = &self.tokens[1..];
}
fn len(&self) -> usize {
self.tokens.len()
}
fn ateof(&self) -> bool {
self.len() == 0
}
fn error<'a, T: Default, E: Into<Cow<'a, str>>>(&mut self, message: E) -> T {
self.errors.push(message.into().into_owned());
T::default()
}
pub fn parse(&mut self, target: &str, outer_target: &str) -> Parsed<Instruction> {
if self.ateof() {
return Parsed::Eof;
}
if self.peek().is_record() || self.peek().is_dictionary() {
return Parsed::Value(Instruction::Assert {
target: target.to_owned(),
template: self.shift(),
});
}
if let Some(tokens) = self.peek().as_sequence() {
self.drop();
let mut inner_parser = Parser::new(tokens);
let instructions = inner_parser.parse_all(target, outer_target);
self.errors.extend(inner_parser.errors);
return Parsed::Value(Instruction::Sequence { instructions });
}
if let Some(s) = self.peek().as_symbol() {
match analyze(s) {
Symbolic::Binder(s) => {
self.drop();
let ctor = match s.as_ref() {
"" => |target, pattern_template, body| { // "?"
Instruction::During { target, pattern_template, body } },
"?" => |target, pattern_template, body| { // "??"
Instruction::OnMessage { target, pattern_template, body } },
"-" => match self.parse(target, outer_target) { // "?-"
Parsed::Value(i) => return Parsed::Value(Instruction::OnStop {
body: Box::new(i),
}),
other => return other,
},
_ => return self.error(format!(
"Invalid use of pattern binder in target: ?{}", s)),
};
if self.ateof() {
return self.error("Missing pattern and instruction in react");
}
let pattern_template = self.shift();
return match self.parse(target, outer_target) {
Parsed::Eof =>
self.error(format!(
"Missing instruction in react with pattern {:?}",
pattern_template)),
Parsed::Skip =>
Parsed::Skip,
Parsed::Value(body) =>
Parsed::Value(ctor(target.to_owned(),
pattern_template,
Box::new(body))),
};
}
Symbolic::Discard => {
self.drop();
return self.error("Invalid use of discard in target position");
},
Symbolic::Reference(s) => {
self.drop();
if self.ateof() {
let m = format!("Missing instruction after retarget: {:?}", s);
return self.error(m);
}
return self.parse(&s, target);
}
Symbolic::Bare(s) => {
if s == "let" {
self.drop();
if self.len() >= 2 && self.tokens[1].value().as_symbol().map(String::as_str) == Some("=")
{
let pattern_template = self.shift();
self.drop();
return match self.parse_expr() {
Some(expr) =>
Parsed::Value(Instruction::Let { pattern_template, expr }),
None => Parsed::Skip,
};
} else {
return self.error("Invalid let statement");
}
} else if s == "!" {
self.drop();
if self.ateof() {
return self.error("Missing payload after '!'");
}
return Parsed::Value(Instruction::Message {
target: target.to_owned(),
template: self.shift(),
});
} else if s == "+=" {
self.drop();
if self.ateof() {
return self.error("Missing payload after '+='");
}
return Parsed::Value(Instruction::Assert {
target: target.to_owned(),
template: self.shift(),
});
} else {
/* fall through */
}
}
Symbolic::Literal(s) => {
if s == "~" { // "=~"
self.drop();
if self.ateof() {
return self.error("Missing pattern, true-instruction and false-continuation in match");
}
let match_template = self.shift();
return match self.parse(outer_target, outer_target) {
Parsed::Eof =>
self.error(format!(
"Missing true-instruction in conditional with pattern {:?}",
match_template)),
Parsed::Skip =>
Parsed::Skip,
Parsed::Value(true_instruction) => {
let false_instructions = self.parse_all(outer_target, outer_target);
Parsed::Value(Instruction::Cond {
value_var: target.to_owned(),
pattern_template: match_template,
on_match: Box::new(true_instruction),
on_nomatch: Box::new(Instruction::Sequence {
instructions: false_instructions,
}),
})
}
};
} else {
/* fall through */
}
}
}
}
{
let m = format!("Invalid token: {:?}", self.shift());
return self.error(m);
}
}
pub fn parse_all(&mut self, target: &str, outer_target: &str) -> Vec<Instruction> {
let mut instructions = Vec::new();
loop {
match self.parse(target, outer_target) {
Parsed::Value(i) => instructions.push(i),
Parsed::Skip => (),
Parsed::Eof => break,
}
}
instructions
}
pub fn parse_top(&mut self, target: &str) -> Result<Option<Instruction>, Vec<String>> {
let instructions = self.parse_all(target, target);
if self.errors.is_empty() {
match instructions.len() {
0 => Ok(None),
_ => Ok(Some(Instruction::Sequence { instructions })),
}
} else {
Err(std::mem::take(&mut self.errors))
}
}
pub fn parse_expr(&mut self) -> Option<Expr> {
if self.ateof() {
return None;
}
if self.peek() == &Value::symbol("dataspace") {
self.drop();
return Some(Expr::Dataspace);
}
if self.peek() == &Value::symbol("timestamp") {
self.drop();
return Some(Expr::Timestamp);
}
if self.peek() == &Value::symbol("facet") {
self.drop();
return Some(Expr::Facet);
}
if self.peek() == &Value::symbol("stringify") {
self.drop();
return Some(Expr::Stringify { expr: Box::new(self.parse_expr()?) });
}
return Some(Expr::Template{ template: self.shift() });
}
}
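// Editorial sketch (hypothetical, not part of this commit): the surface
// syntax accepted by `Parser::parse` above, assuming the usual sigil
// conventions (`?x` binds, `$x` references, `=sym` quotes a symbol):
//
//     <service-ready "demo">      a record or dictionary is asserted at the target
//     ! <ping>                    "!" sends the next value as a message
//     += <status "up">            "+=" asserts the next value, like a bare record
//     let ?x = timestamp          "let <pattern> = <expr>" evaluates and binds
//     ? <msg ?who> [ ... ]        "?" reacts, during matching assertions
//     ?? <msg ?who> [ ... ]       "??" reacts to matching messages
//
// A sequence token ([...]) parses to Instruction::Sequence, and "=~"
// introduces an Instruction::Cond against the current target value.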


@@ -0,0 +1,274 @@
use notify::DebouncedEvent;
use notify::Watcher;
use notify::RecursiveMode;
use notify::watcher;
use syndicate::preserves::rec;
use std::fs;
use std::future;
use std::io;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::mpsc::channel;
use std::thread;
use std::time::Duration;
use syndicate::actor::*;
use syndicate::error::Error;
use syndicate::enclose;
use syndicate::supervise::{Supervisor, SupervisorConfiguration};
use syndicate::trace;
use syndicate::value::BinarySource;
use syndicate::value::BytesBinarySource;
use syndicate::value::Map;
use syndicate::value::NestedValue;
use syndicate::value::NoEmbeddedDomainCodec;
use syndicate::value::Reader;
use syndicate::value::ViaCodec;
use crate::language::language;
use crate::lifecycle;
use crate::schemas::internal_services;
use crate::script;
use syndicate_macros::during;
pub fn on_demand(t: &mut Activation, config_ds: Arc<Cap>) {
t.spawn(Some(AnyValue::symbol("config_watcher")), move |t| {
Ok(during!(t, config_ds, language(), <run-service $spec: internal_services::ConfigWatcher::<AnyValue>>, |t| {
Supervisor::start(
t,
Some(rec![AnyValue::symbol("config"), AnyValue::new(spec.path.clone())]),
SupervisorConfiguration::default(),
enclose!((config_ds, spec) lifecycle::updater(config_ds, spec)),
enclose!((config_ds) move |t| enclose!((config_ds, spec) run(t, config_ds, spec))))
}))
});
}
fn convert_notify_error(e: notify::Error) -> Error {
syndicate::error::error(&format!("Notify error: {:?}", e), AnyValue::new(false))
}
fn process_existing_file(
t: &mut Activation,
mut env: script::Env,
) -> io::Result<Option<FacetId>> {
let mut contents = fs::read(&env.path)?;
contents.append(&mut Vec::from("\n[]".as_bytes())); // improved ergonomics of trailing comments
let tokens: Vec<AnyValue> = BytesBinarySource::new(&contents)
.text::<AnyValue, _>(ViaCodec::new(NoEmbeddedDomainCodec))
.configured(true)
.collect::<Result<Vec<_>, _>>()?;
match script::Parser::new(&tokens).parse_top("config") {
Ok(Some(i)) => Ok(Some(t.facet(|t| {
tracing::debug!("Instructions for file {:?}: {:#?}", &env.path, &i);
env.safe_eval(t, &i);
Ok(())
}).expect("Successful facet startup"))),
Ok(None) => Ok(None),
Err(errors) => {
for e in errors {
tracing::error!(path = ?env.path, message = %e);
}
Ok(None)
}
}
}
fn process_path(
t: &mut Activation,
env: script::Env,
) -> io::Result<Option<FacetId>> {
match fs::metadata(&env.path) {
Ok(md) => if md.is_file() {
process_existing_file(t, env)
} else {
Ok(None)
}
Err(e) => if e.kind() != io::ErrorKind::NotFound {
Err(e)?
} else {
Ok(None)
}
}
}
fn is_hidden(path: &PathBuf) -> bool {
match path.file_name().and_then(|n| n.to_str()) {
Some(n) => n.starts_with("."),
None => true, // no file name available: treat as hidden
}
}
fn should_process(path: &PathBuf) -> bool {
path.file_name().and_then(|n| n.to_str()).map(|n| n.ends_with(".pr")).unwrap_or(false)
}
fn scan_file(
t: &mut Activation,
path_state: &mut Map<PathBuf, FacetId>,
env: script::Env,
) -> bool {
let path = env.path.clone();
if is_hidden(&path) || !should_process(&path) {
return true;
}
tracing::trace!("scan_file: scanning {:?}", &path);
match process_path(t, env) {
Ok(maybe_facet_id) => {
if let Some(facet_id) = maybe_facet_id {
tracing::info!("scan_file: processed {:?}", &path);
path_state.insert(path, facet_id);
}
true
},
Err(e) => {
tracing::error!("scan_file: {:?}: {:?}", &path, e);
false
}
}
}
fn initial_scan(
t: &mut Activation,
path_state: &mut Map<PathBuf, FacetId>,
config_ds: &Arc<Cap>,
env: script::Env,
) {
if is_hidden(&env.path) {
return;
}
match fs::metadata(&env.path) {
Ok(md) => if md.is_file() {
scan_file(t, path_state, env);
} else {
match fs::read_dir(&env.path) {
Ok(unsorted_entries) => {
let mut entries: Vec<fs::DirEntry> = Vec::new();
for er in unsorted_entries {
match er {
Ok(e) =>
entries.push(e),
Err(e) =>
tracing::warn!(
"initial_scan: transient during scan of {:?}: {:?}", &env.path, e),
}
}
entries.sort_by_key(|e| e.file_name());
for e in entries {
initial_scan(t, path_state, config_ds, env.clone_with_path(e.path()));
}
}
Err(e) => tracing::warn!("initial_scan: enumerating {:?}: {:?}", &env.path, e),
}
},
Err(e) => tracing::warn!("initial_scan: `stat`ing {:?}: {:?}", &env.path, e),
}
}
fn run(
t: &mut Activation,
config_ds: Arc<Cap>,
spec: internal_services::ConfigWatcher,
) -> ActorResult {
lifecycle::terminate_on_service_restart(t, &config_ds, &spec);
let path = fs::canonicalize(spec.path.clone())?;
let env = script::Env::new(path, spec.env.0.clone());
tracing::info!(?env);
let (tx, rx) = channel();
let mut watcher = watcher(tx, Duration::from_millis(100)).map_err(convert_notify_error)?;
watcher.watch(&env.path, RecursiveMode::Recursive).map_err(convert_notify_error)?;
let facet = t.facet_ref();
let trace_collector = t.trace_collector();
let span = tracing::Span::current();
thread::spawn(move || {
let _entry = span.enter();
let mut path_state: Map<PathBuf, FacetId> = Map::new();
{
let cause = trace_collector.as_ref().map(|_| trace::TurnCause::external("initial_scan"));
let account = Account::new(Some(AnyValue::symbol("initial_scan")), trace_collector.clone());
if !facet.activate(
&account, cause, |t| {
initial_scan(t, &mut path_state, &config_ds, env.clone());
config_ds.assert(t, language(), &lifecycle::ready(&spec));
Ok(())
})
{
return;
}
}
tracing::trace!("initial_scan complete");
let mut rescan = |paths: Vec<PathBuf>| {
let cause = trace_collector.as_ref().map(|_| trace::TurnCause::external("rescan"));
let account = Account::new(Some(AnyValue::symbol("rescan")), trace_collector.clone());
facet.activate(&account, cause, |t| {
let mut to_stop = Vec::new();
for path in paths.into_iter() {
let maybe_facet_id = path_state.remove(&path);
let new_content_ok =
scan_file(t, &mut path_state, env.clone_with_path(path.clone()));
if let Some(old_facet_id) = maybe_facet_id {
if new_content_ok {
to_stop.push(old_facet_id);
} else {
path_state.insert(path, old_facet_id);
}
}
}
for facet_id in to_stop.into_iter() {
t.stop_facet(facet_id);
}
Ok(())
})
};
while let Ok(event) = rx.recv() {
tracing::trace!("notification: {:?}", &event);
let keep_running = match event {
DebouncedEvent::NoticeWrite(_p) |
DebouncedEvent::NoticeRemove(_p) =>
true,
DebouncedEvent::Create(p) |
DebouncedEvent::Write(p) |
DebouncedEvent::Chmod(p) |
DebouncedEvent::Remove(p) =>
rescan(vec![p]),
DebouncedEvent::Rename(p, q) =>
rescan(vec![p, q]),
_ => {
tracing::info!("{:?}", event);
true
}
};
if !keep_running { break; }
}
{
let cause = trace_collector.as_ref().map(|_| trace::TurnCause::external("termination"));
let account = Account::new(Some(AnyValue::symbol("termination")), trace_collector);
facet.activate(&account, cause, |t| {
tracing::trace!("linked thread terminating associated facet");
Ok(t.stop())
});
}
tracing::trace!("linked thread done");
});
t.linked_task(Some(AnyValue::symbol("cancel-wait")), async move {
future::pending::<()>().await;
drop(watcher);
Ok(LinkedTaskTermination::KeepFacet)
});
Ok(())
}
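// Editorial sketch (hypothetical, not part of this commit): the shape of the
// thread-to-actor bridge used in `run` above, in miniature. A blocking channel
// is drained on a dedicated thread; each event re-enters the actor system via
// `facet.activate`, which returns false once the facet is gone (`handle_event`
// is a placeholder):
//
//     let facet = t.facet_ref();
//     thread::spawn(move || {
//         while let Ok(event) = rx.recv() {
//             let account = Account::new(None, trace_collector.clone());
//             if !facet.activate(&account, None, |t| handle_event(t, event)) {
//                 break; // facet stopped: let the thread wind down
//             }
//         }
//     });
//
// The companion `linked_task` exists only to own `watcher`: when the facet
// stops, the task is cancelled, `watcher` is dropped, and the OS-level file
// watch ends with it.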


@@ -0,0 +1,475 @@
use preserves_schema::Codec;
use std::sync::Arc;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::preserves::rec;
use syndicate::schemas::service;
use syndicate::supervise::{Supervisor, SupervisorConfiguration};
use syndicate::trace;
use syndicate::value::NestedValue;
use tokio::io::AsyncRead;
use tokio::io::AsyncBufReadExt;
use tokio::io::BufReader;
use tokio::process;
use crate::counter;
use crate::language::language;
use crate::lifecycle;
use crate::schemas::external_services::*;
use syndicate_macros::during;
pub fn on_demand(t: &mut Activation, config_ds: Arc<Cap>, root_ds: Arc<Cap>) {
t.spawn(Some(AnyValue::symbol("daemon_listener")), move |t| {
Ok(during!(t, config_ds, language(), <run-service $spec: DaemonService::<AnyValue>>,
enclose!((config_ds, root_ds) move |t: &mut Activation| {
supervise_daemon(t, config_ds, root_ds, spec)
})))
});
}
fn supervise_daemon(
t: &mut Activation,
config_ds: Arc<Cap>,
root_ds: Arc<Cap>,
spec: DaemonService,
) -> ActorResult {
t.facet(|t| {
lifecycle::on_service_restart(t, &config_ds, &spec, enclose!(
(config_ds, root_ds, spec) move |t| {
tracing::info!(id = ?spec.id, "Terminating to restart");
t.stop_facet_and_continue(t.facet_id(), Some(
enclose!((config_ds, root_ds, spec) move |t: &mut Activation| {
supervise_daemon(t, config_ds, root_ds, spec)
})))
}));
Supervisor::start(
t,
Some(language().unparse(&spec)),
SupervisorConfiguration::on_error_only(),
enclose!((config_ds, spec) lifecycle::updater(config_ds, spec)),
enclose!((config_ds, root_ds) move |t|
enclose!((config_ds, root_ds, spec) run(t, config_ds, root_ds, spec))))
})?;
Ok(())
}
impl Process {
fn elaborate(self) -> FullProcess {
match self {
Process::Simple(command_line) => FullProcess {
argv: *command_line,
env: ProcessEnv::Absent,
dir: ProcessDir::Absent,
clear_env: ClearEnv::Absent,
},
Process::Full(spec) => *spec,
}
}
}
impl FullProcess {
fn build_command(&self) -> Option<process::Command> {
let argv = self.argv.clone().elaborate();
let mut cmd = process::Command::new(argv.program);
cmd.args(argv.args);
match &self.dir {
ProcessDir::Present { dir } => { cmd.current_dir(dir); () },
ProcessDir::Absent => (),
ProcessDir::Invalid { dir } => {
tracing::error!(?dir, "Invalid working directory");
return None;
}
}
match &self.clear_env {
ClearEnv::Present { clear_env: true } => { cmd.env_clear(); () },
ClearEnv::Present { clear_env: false } => (),
ClearEnv::Absent => (),
ClearEnv::Invalid { clear_env } => {
tracing::error!(?clear_env, "Invalid clearEnv setting");
return None;
}
}
match &self.env {
ProcessEnv::Present { env } => {
for (k, v) in env {
if let Some(env_variable) = match k {
EnvVariable::String(k) => Some(k),
EnvVariable::Symbol(k) => Some(k),
EnvVariable::Invalid(env_variable) => {
tracing::error!(?env_variable,
"Invalid environment variable name");
return None;
}
} {
match v {
EnvValue::Set(value) => { cmd.env(env_variable, value); () }
EnvValue::Remove => { cmd.env_remove(env_variable); () }
EnvValue::Invalid(value) => {
tracing::error!(?env_variable, ?value,
"Invalid environment variable value");
return None;
}
}
}
}
}
ProcessEnv::Absent => (),
ProcessEnv::Invalid { env } => {
tracing::error!(?env, "Invalid daemon environment");
return None;
}
}
cmd.kill_on_drop(true);
Some(cmd)
}
}
impl DaemonProcessSpec {
fn elaborate(self) -> FullDaemonProcess {
match self {
DaemonProcessSpec::Simple(command_line) => FullDaemonProcess {
process: Process::Simple(command_line).elaborate(),
ready_on_start: ReadyOnStart::Absent,
restart: RestartField::Absent,
protocol: ProtocolField::Absent,
},
DaemonProcessSpec::OneShot { setup } => FullDaemonProcess {
process: Process::Simple(setup).elaborate(),
ready_on_start: ReadyOnStart::Present { ready_on_start: false },
restart: RestartField::Present { restart: Box::new(RestartPolicy::OnError) },
protocol: ProtocolField::Absent,
},
DaemonProcessSpec::Full(spec) => *spec,
}
}
}
impl CommandLine {
fn elaborate(self) -> FullCommandLine {
match self {
CommandLine::Shell(s) => FullCommandLine {
program: "sh".to_owned(),
args: vec!["-c".to_owned(), s],
},
CommandLine::Full(command_line) => *command_line,
}
}
}
struct DaemonInstance {
config_ds: Arc<Cap>,
log_ds: Arc<Cap>,
service: AnyValue,
cmd: process::Command,
announce_presumed_readiness: bool,
unready_configs: Arc<Field<isize>>,
completed_processes: Arc<Field<isize>>,
restart_policy: RestartPolicy,
protocol: Protocol,
}
impl DaemonInstance {
fn handle_exit(self, t: &mut Activation, error_message: Option<String>) -> ActorResult {
let delay =
std::time::Duration::from_millis(if let None = error_message { 200 } else { 1000 });
t.stop_facet_and_continue(t.facet_id(), Some(move |t: &mut Activation| {
#[derive(Debug)]
enum NextStep {
SleepAndRestart,
SignalSuccessfulCompletion,
}
use NextStep::*;
let next_step = match self.restart_policy {
RestartPolicy::Always => SleepAndRestart,
RestartPolicy::OnError =>
match &error_message {
None => SignalSuccessfulCompletion,
Some(_) => SleepAndRestart,
},
RestartPolicy::All =>
match &error_message {
None => SignalSuccessfulCompletion,
Some(s) => {
tracing::error!(cmd = ?self.cmd, next_step = %"RestartDaemon", message = %s);
Err(s.as_str())?
}
},
RestartPolicy::Never => SignalSuccessfulCompletion,
};
match error_message {
Some(m) => tracing::error!(cmd = ?self.cmd, ?next_step, message = %m),
None => tracing::info!(cmd = ?self.cmd, ?next_step),
}
match next_step {
SleepAndRestart => t.after(delay, |t| self.start(t)),
SignalSuccessfulCompletion => {
t.facet(|t| {
let _ = t.prevent_inert_check();
counter::adjust(t, &self.completed_processes, 1);
Ok(())
})?;
()
}
}
Ok(())
}))
}
fn log<R: 'static + Send + AsyncRead + Unpin>(
&self,
t: &mut Activation,
pid: Option<u32>,
r: R,
kind: &str
) -> ActorResult {
t.facet(|t| {
let facet = t.facet_ref();
let log_ds = self.log_ds.clone();
let service = self.service.clone();
let kind = AnyValue::symbol(kind);
let pid = match pid {
Some(n) => AnyValue::new(n),
None => AnyValue::symbol("unknown"),
};
let trace_collector = t.trace_collector();
t.linked_task(None, async move {
let mut r = BufReader::new(r);
let cause = trace_collector.as_ref().map(
|_| trace::TurnCause::external(kind.value().as_symbol().unwrap()));
let account = Account::new(None, trace_collector);
loop {
let mut buf = Vec::new();
match r.read_until(b'\n', &mut buf).await {
Ok(0) | Err(_) => break,
Ok(_) => (),
}
let buf = match std::str::from_utf8(&buf) {
Ok(s) => AnyValue::new(s),
Err(_) => AnyValue::bytestring(buf),
};
let now = AnyValue::new(chrono::Utc::now().to_rfc3339());
if !facet.activate(
&account, cause.clone(), enclose!((pid, service, kind) |t| {
log_ds.message(t, &(), &syndicate_macros::template!(
"<log =now {
pid: =pid,
service: =service,
stream: =kind,
line: =buf,
}>"));
Ok(())
}))
{
break;
}
}
Ok(LinkedTaskTermination::Normal)
});
Ok(())
})?;
Ok(())
}
fn start(mut self, t: &mut Activation) -> ActorResult {
t.facet(|t| {
tracing::trace!(cmd = ?self.cmd, "starting");
let mut child = match self.cmd.spawn() {
Ok(child) => child,
Err(e) => {
tracing::debug!(spawn_err = ?e);
return self.handle_exit(t, Some(format!("{}", e)));
}
};
let pid = child.id();
tracing::debug!(?pid, cmd = ?self.cmd, "started");
let facet = t.facet_ref();
if let Some(r) = child.stderr.take() {
self.log(t, pid, r, "stderr")?;
}
match self.protocol {
Protocol::TextSyndicate => self.relay_facet(t, &mut child, true)?,
Protocol::BinarySyndicate => self.relay_facet(t, &mut child, false)?,
Protocol::None => {
if let Some(r) = child.stdout.take() {
self.log(t, pid, r, "stdout")?;
}
}
}
if self.announce_presumed_readiness {
counter::adjust(t, &self.unready_configs, -1);
}
let trace_collector = t.trace_collector();
t.linked_task(
Some(rec![AnyValue::symbol("wait"), self.service.clone()]),
enclose!((facet) async move {
tracing::trace!("waiting for process exit");
let status = child.wait().await?;
tracing::debug!(?status);
let cause = trace_collector.as_ref().map(
|_| trace::TurnCause::external("instance-terminated"));
let account = Account::new(Some(AnyValue::symbol("instance-terminated")), trace_collector);
facet.activate(&account, cause, |t| {
let m = if status.success() { None } else { Some(format!("{}", status)) };
self.handle_exit(t, m)
});
Ok(LinkedTaskTermination::Normal)
}));
Ok(())
})?;
Ok(())
}
fn relay_facet(&self, t: &mut Activation, child: &mut process::Child, output_text: bool) -> ActorResult {
use syndicate::relay;
use syndicate::schemas::sturdy;
let to_child = child.stdin.take().expect("pipe to child");
let from_child = child.stdout.take().expect("pipe from child");
let i = relay::Input::Bytes(Box::pin(from_child));
let o = relay::Output::Bytes(Box::pin(to_child));
t.facet(|t| {
let cap = relay::TunnelRelay::run(t, i, o, None, Some(sturdy::Oid(0.into())), output_text)
.ok_or("initial capability reference unavailable")?;
tracing::info!(?cap);
self.config_ds.assert(t, language(), &service::ServiceObject {
service_name: self.service.clone(),
object: AnyValue::domain(cap),
});
Ok(())
})?;
Ok(())
}
}
fn run(
t: &mut Activation,
config_ds: Arc<Cap>,
root_ds: Arc<Cap>,
service: DaemonService,
) -> ActorResult {
let spec = language().unparse(&service);
let total_configs = t.named_field("total_configs", 0isize);
let unready_configs = t.named_field("unready_configs", 1isize);
let completed_processes = t.named_field("completed_processes", 0isize);
t.dataflow({
let mut handle = None;
let ready = lifecycle::ready(&spec);
enclose!((config_ds, unready_configs) move |t| {
let busy_count = *t.get(&unready_configs);
tracing::debug!(?busy_count);
config_ds.update(t, &mut handle, language(), if busy_count == 0 { Some(&ready) } else { None });
Ok(())
})
})?;
t.dataflow(enclose!((completed_processes, total_configs) move |t| {
let total = *t.get(&total_configs);
let completed = *t.get(&completed_processes);
tracing::debug!(total_configs = ?total, completed_processes = ?completed);
if total > 0 && total == completed {
t.stop();
}
Ok(())
}))?;
let trace_collector = t.trace_collector();
enclose!((config_ds, unready_configs, completed_processes)
during!(t, config_ds.clone(), language(), <daemon #(&service.id) $config>, {
enclose!((spec, config_ds, root_ds, unready_configs, completed_processes, trace_collector)
|t: &mut Activation| {
tracing::debug!(?config, "new config");
counter::adjust(t, &unready_configs, 1);
counter::adjust(t, &total_configs, 1);
match language().parse::<DaemonProcessSpec>(&config) {
Ok(config) => {
tracing::info!(?config);
let config = config.elaborate();
let facet = t.facet_ref();
t.linked_task(Some(AnyValue::symbol("subprocess")), async move {
let mut cmd = config.process.build_command().ok_or("Cannot start daemon process")?;
let announce_presumed_readiness = match config.ready_on_start {
ReadyOnStart::Present { ready_on_start } => ready_on_start,
ReadyOnStart::Absent => true,
ReadyOnStart::Invalid { ready_on_start } => {
tracing::error!(?ready_on_start, "Invalid readyOnStart value");
Err("Invalid readyOnStart value")?
}
};
let restart_policy = match config.restart {
RestartField::Present { restart } => *restart,
RestartField::Absent => RestartPolicy::Always,
RestartField::Invalid { restart } => {
tracing::error!(?restart, "Invalid restart value");
Err("Invalid restart value")?
}
};
let protocol = match config.protocol {
ProtocolField::Present { protocol } => *protocol,
ProtocolField::Absent => Protocol::None,
ProtocolField::Invalid { protocol } => {
tracing::error!(?protocol, "Invalid protocol value");
Err("Invalid protocol value")?
}
};
cmd.stdin(match &protocol {
Protocol::None =>
std::process::Stdio::null(),
Protocol::TextSyndicate | Protocol::BinarySyndicate =>
std::process::Stdio::piped(),
});
cmd.stdout(std::process::Stdio::piped());
cmd.stderr(std::process::Stdio::piped());
let daemon_instance = DaemonInstance {
config_ds,
log_ds: root_ds,
service: spec,
cmd,
announce_presumed_readiness,
unready_configs,
completed_processes,
restart_policy,
protocol,
};
let cause = trace_collector.as_ref().map(
|_| trace::TurnCause::external("instance-startup"));
let account = Account::new(Some(AnyValue::symbol("instance-startup")), trace_collector);
facet.activate(&account, cause, |t| {
daemon_instance.start(t)
});
Ok(LinkedTaskTermination::KeepFacet)
});
Ok(())
}
Err(_) => {
tracing::error!(?config, "Invalid Process specification");
return Ok(());
}
}
})
}));
tracing::debug!("syncing to ds");
counter::sync_and_adjust(t, &config_ds.underlying, &unready_configs, -1);
Ok(())
}


@ -0,0 +1,65 @@
use preserves_schema::Codec;
use std::sync::Arc;
use std::sync::atomic::Ordering;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::preserves::rec;
use syndicate::preserves::value::NestedValue;
use crate::language::language;
use crate::lifecycle;
use crate::schemas::internal_services::DebtReporter;
use syndicate_macros::during;
pub fn on_demand(t: &mut Activation, ds: Arc<Cap>) {
t.spawn(Some(AnyValue::symbol("debt_reporter_listener")), move |t| {
Ok(during!(t, ds, language(), <run-service $spec: DebtReporter>, |t: &mut Activation| {
t.spawn_link(Some(rec![AnyValue::symbol("debt_reporter"), language().unparse(&spec)]),
enclose!((ds) |t| run(t, ds, spec)));
Ok(())
}))
});
}
fn run(t: &mut Activation, ds: Arc<Cap>, spec: DebtReporter) -> ActorResult {
ds.assert(t, language(), &lifecycle::started(&spec));
ds.assert(t, language(), &lifecycle::ready(&spec));
t.every(core::time::Duration::from_millis((spec.interval_seconds.0 * 1000.0) as u64), |_t| {
for (account_id, (name, debt)) in syndicate::actor::ACCOUNTS.read().iter() {
tracing::info!(account_id, ?name, debt = ?debt.load(Ordering::Relaxed));
}
// let snapshot = syndicate::actor::ACTORS.read().clone();
// for (id, (name, ac_ref)) in snapshot.iter() {
// if *id == _t.state.actor_id {
// tracing::debug!("skipping report on the reporting actor, to avoid deadlock");
// continue;
// }
// tracing::trace!(?id, "about to lock");
// tracing::info_span!("actor", id, ?name).in_scope(|| match &*ac_ref.state.lock() {
// ActorState::Terminated { exit_status } =>
// tracing::info!(?exit_status, "terminated"),
// ActorState::Running(state) => {
// tracing::info!(field_count = ?state.fields.len(),
// outbound_assertion_count = ?state.outbound_assertions.len(),
// facet_count = ?state.facet_nodes.len());
// tracing::info_span!("facets").in_scope(|| {
// for (facet_id, f) in state.facet_nodes.iter() {
// tracing::info!(
// ?facet_id,
// parent_id = ?f.parent_facet_id,
// outbound_handle_count = ?f.outbound_handles.len(),
// linked_task_count = ?f.linked_tasks.len(),
// inert_check_preventers = ?f.inert_check_preventers.load(Ordering::Relaxed));
// }
// });
// }
// });
// }
Ok(())
})
}


@ -0,0 +1,39 @@
use preserves_schema::Codec;
use std::sync::Arc;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::preserves::rec;
use syndicate::preserves::value::NestedValue;
use crate::gatekeeper;
use crate::language::Language;
use crate::language::language;
use crate::lifecycle;
use crate::schemas::internal_services::Gatekeeper;
use syndicate_macros::during;
pub fn on_demand(t: &mut Activation, ds: Arc<Cap>) {
t.spawn(Some(AnyValue::symbol("gatekeeper_listener")), move |t| {
Ok(during!(t, ds, language(), <run-service $spec: Gatekeeper::<AnyValue>>, |t: &mut Activation| {
t.spawn_link(Some(rec![AnyValue::symbol("gatekeeper"), language().unparse(&spec)]),
enclose!((ds) |t| run(t, ds, spec)));
Ok(())
}))
});
}
fn run(t: &mut Activation, ds: Arc<Cap>, spec: Gatekeeper<AnyValue>) -> ActorResult {
let resolver = t.create(syndicate::entity(Arc::clone(&spec.bindspace))
.on_asserted_facet(gatekeeper::facet_handle_resolve));
ds.assert(t, language(), &syndicate::schemas::service::ServiceObject {
service_name: language().unparse(&spec),
object: AnyValue::domain(Cap::guard(Language::arc(), resolver)),
});
gatekeeper::handle_binds(t, &spec.bindspace)?;
ds.assert(t, language(), &lifecycle::started(&spec));
ds.assert(t, language(), &lifecycle::ready(&spec));
Ok(())
}


@ -0,0 +1,348 @@
use preserves_schema::Codec;
use std::convert::TryFrom;
use std::io::Read;
use std::sync::Arc;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::error::Error;
use syndicate::preserves::rec;
use syndicate::preserves::value::Map;
use syndicate::preserves::value::NestedValue;
use syndicate::schemas::http;
use syndicate::value::signed_integer::SignedInteger;
use crate::language::language;
use crate::lifecycle;
use crate::schemas::internal_services::HttpRouter;
use crate::schemas::internal_services::HttpStaticFileServer;
use syndicate_macros::during;
lazy_static::lazy_static! {
pub static ref MIME_TABLE: Map<String, String> = load_mime_table("/etc/mime.types").unwrap_or_default();
}
pub fn load_mime_table(path: &str) -> Result<Map<String, String>, std::io::Error> {
let mut table = Map::new();
let file = std::fs::read_to_string(path)?;
for line in file.split('\n') {
if line.starts_with('#') {
continue;
}
let pieces = line.split(&[' ', '\t'][..]).collect::<Vec<&str>>();
for i in 1..pieces.len() {
table.insert(pieces[i].to_string(), pieces[0].to_string());
}
}
Ok(table)
}
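The table above maps file extension to MIME type (one `mime.types` line is `type ext1 ext2 …`, with `#` starting a comment). A minimal self-contained sketch of the same parse, runnable without the crate — it additionally filters out empty fields, which consecutive separators would otherwise produce; the names here are illustrative, not part of the server:

```rust
use std::collections::BTreeMap;

// Parse mime.types-format text: each line is "mime/type ext1 ext2 ...";
// lines starting with '#' are comments. Returns extension -> mime type.
fn parse_mime_lines(input: &str) -> BTreeMap<String, String> {
    let mut table = BTreeMap::new();
    for line in input.lines() {
        if line.starts_with('#') {
            continue;
        }
        // Split on spaces/tabs, dropping empty fields from repeated separators.
        let pieces: Vec<&str> = line
            .split(&[' ', '\t'][..])
            .filter(|p| !p.is_empty())
            .collect();
        if let Some((mime, exts)) = pieces.split_first() {
            for ext in exts {
                table.insert(ext.to_string(), mime.to_string());
            }
        }
    }
    table
}
```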
pub fn on_demand(t: &mut Activation, ds: Arc<Cap>) {
t.spawn(Some(AnyValue::symbol("http_router_listener")), move |t| {
enclose!((ds) during!(t, ds, language(), <run-service $spec: HttpRouter::<AnyValue>>, |t: &mut Activation| {
t.spawn_link(Some(rec![AnyValue::symbol("http_router"), language().unparse(&spec)]),
enclose!((ds) |t| run(t, ds, spec)));
Ok(())
}));
enclose!((ds) during!(t, ds, language(), <run-service $spec: HttpStaticFileServer>, |t: &mut Activation| {
t.spawn_link(Some(rec![AnyValue::symbol("http_static_file_server"), language().unparse(&spec)]),
enclose!((ds) |t| run_static_file_server(t, ds, spec)));
Ok(())
}));
Ok(())
});
}
#[derive(Debug, Clone)]
struct ActiveHandler {
cap: Arc<Cap>,
terminated: Arc<Field<bool>>,
}
type MethodTable = Map<http::MethodPattern, Vec<ActiveHandler>>;
type HostTable = Map<http::HostPattern, Map<http::PathPattern, MethodTable>>;
type RoutingTable = Map<SignedInteger, HostTable>;
fn request_host(value: &http::RequestHost) -> Option<String> {
match value {
http::RequestHost::Present(h) => Some(h.to_owned()),
http::RequestHost::Absent => None,
}
}
fn run(t: &mut Activation, ds: Arc<Cap>, spec: HttpRouter) -> ActorResult {
ds.assert(t, language(), &lifecycle::started(&spec));
ds.assert(t, language(), &lifecycle::ready(&spec));
let httpd = spec.httpd;
let routes: Arc<Field<RoutingTable>> = t.named_field("routes", Map::new());
enclose!((httpd, routes) during!(t, httpd, language(), <http-bind _ $port _ _ _>, |t: &mut Activation| {
let port1 = port.clone();
enclose!((httpd, routes) during!(t, httpd, language(), <http-listener #(&port1)>, enclose!((routes, port) |t: &mut Activation| {
let port2 = port.clone();
during!(t, httpd, language(), <http-bind $host #(&port2) $method $path $handler>, |t: &mut Activation| {
tracing::debug!("+HTTP binding {:?} {:?} {:?} {:?} {:?}", host, port, method, path, handler);
let port = port.value().to_signedinteger()?;
let host = language().parse::<http::HostPattern>(&host)?;
let path = language().parse::<http::PathPattern>(&path)?;
let method = language().parse::<http::MethodPattern>(&method)?;
let handler_cap = handler.value().to_embedded()?.clone();
let handler_terminated = t.named_field("handler-terminated", false);
t.get_mut(&routes)
.entry(port.clone()).or_default()
.entry(host.clone()).or_default()
.entry(path.clone()).or_default()
.entry(method.clone()).or_default()
.push(ActiveHandler {
cap: handler_cap.clone(),
terminated: handler_terminated,
});
t.on_stop(enclose!((routes, method, path, host, port) move |t| {
tracing::debug!("-HTTP binding {:?} {:?} {:?} {:?} {:?}", host, port, method, path, handler);
let port_map = t.get_mut(&routes);
let host_map = port_map.entry(port.clone()).or_default();
let path_map = host_map.entry(host.clone()).or_default();
let method_map = path_map.entry(path.clone()).or_default();
let handler_vec = method_map.entry(method.clone()).or_default();
let handler = {
let i = handler_vec.iter().position(|a| a.cap == handler_cap)
.expect("Expected an index of an active handler to remove");
handler_vec.swap_remove(i)
};
if handler_vec.is_empty() {
method_map.remove(&method);
}
if method_map.is_empty() {
path_map.remove(&path);
}
if path_map.is_empty() {
host_map.remove(&host);
}
if host_map.is_empty() {
port_map.remove(&port);
}
*t.get_mut(&handler.terminated) = true;
Ok(())
}));
Ok(())
});
Ok(())
})));
Ok(())
}));
during!(t, httpd, language(), <request $req $res>, |t: &mut Activation| {
let req = match language().parse::<http::HttpRequest>(&req) { Ok(v) => v, Err(_) => return Ok(()) };
let res = match res.value().to_embedded() { Ok(v) => v, Err(_) => return Ok(()) };
tracing::trace!("Looking up handler for {:#?} in {:#?}", &req, &t.get(&routes));
let host_map = match t.get(&routes).get(&req.port) {
Some(host_map) => host_map,
None => return send_empty(t, res, 404, "Not found"),
};
let methods = match request_host(&req.host).and_then(|h| try_hostname(host_map, http::HostPattern::Host(h), &req.path).transpose()).transpose()? {
Some(methods) => methods,
None => match try_hostname(host_map, http::HostPattern::Any, &req.path)? {
Some(methods) => methods,
None => return send_empty(t, res, 404, "Not found"),
}
};
let handlers = match methods.get(&http::MethodPattern::Specific(req.method.clone())) {
Some(handlers) => handlers,
None => match methods.get(&http::MethodPattern::Any) {
Some(handlers) => handlers,
None => {
let allowed = methods.keys().map(|k| match k {
http::MethodPattern::Specific(m) => m.to_uppercase(),
http::MethodPattern::Any => unreachable!(),
}).collect::<Vec<String>>().join(", ");
res.message(t, language(), &http::HttpResponse::Status {
code: 405.into(), message: "Method Not Allowed".into() });
res.message(t, language(), &http::HttpResponse::Header {
name: "allow".into(), value: allowed });
return send_done(t, res);
}
}
};
if handlers.len() > 1 {
tracing::warn!(?req, "Too many handlers available");
}
let ActiveHandler { cap, terminated } = handlers.first().expect("Nonempty handler set").clone();
tracing::trace!("Handler for {:?} is {:?}", &req, &cap);
t.dataflow(enclose!((terminated, req, res) move |t| {
if *t.get(&terminated) {
tracing::trace!("Handler for {:?} terminated", &req);
send_empty(t, &res, 500, "Internal Server Error")?;
}
Ok(())
}))?;
cap.assert(t, language(), &http::HttpContext { req, res: res.clone() });
Ok(())
});
Ok(())
}
fn send_done(t: &mut Activation, res: &Arc<Cap>) -> ActorResult {
res.message(t, language(), &http::HttpResponse::Done {
chunk: Box::new(http::Chunk::Bytes(vec![])) });
Ok(())
}
fn send_empty(t: &mut Activation, res: &Arc<Cap>, code: u16, message: &str) -> ActorResult {
res.message(t, language(), &http::HttpResponse::Status {
code: code.into(), message: message.into() });
send_done(t, res)
}
fn path_pattern_matches(path_pat: &http::PathPattern, path: &Vec<String>) -> bool {
let mut path_iter = path.iter();
for pat_elem in path_pat.0.iter() {
match pat_elem {
http::PathPatternElement::Label(v) => match path_iter.next() {
Some(path_elem) => {
if v != path_elem {
return false;
}
}
None => return false,
},
http::PathPatternElement::Wildcard => match path_iter.next() {
Some(_) => (),
None => return false,
},
http::PathPatternElement::Rest => return true,
}
}
match path_iter.next() {
Some(_more) => false,
None => true,
}
}
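The matcher above has three behaviours worth noting: a `Label` must equal its path element exactly, a `Wildcard` consumes exactly one element, and `Rest` succeeds immediately against whatever remains; a path with leftover elements after the pattern is exhausted does not match. A self-contained re-implementation of the same semantics with a local enum (so it runs without the schema types), for illustration only:

```rust
#[derive(Clone)]
enum PatElem {
    Label(String),
    Wildcard,
    Rest,
}

// Mirrors path_pattern_matches: Label must equal the element, Wildcard
// consumes one element, Rest matches all remaining elements, and the
// path must otherwise be fully consumed.
fn matches(pat: &[PatElem], path: &[String]) -> bool {
    let mut it = path.iter();
    for pe in pat {
        match pe {
            PatElem::Label(v) => match it.next() {
                Some(e) if v == e => (),
                _ => return false,
            },
            PatElem::Wildcard => {
                if it.next().is_none() {
                    return false;
                }
            }
            PatElem::Rest => return true,
        }
    }
    it.next().is_none()
}
```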
fn try_hostname<'table>(
host_map: &'table HostTable,
host_pat: http::HostPattern,
path: &Vec<String>,
) -> Result<Option<&'table MethodTable>, Error> {
match host_map.get(&host_pat) {
None => Ok(None),
Some(path_table) => {
for (path_pat, method_table) in path_table.iter() {
tracing::trace!("Checking path {:?} against pattern {:?}", &path, &path_pat);
if path_pattern_matches(path_pat, path) {
return Ok(Some(method_table));
}
}
Ok(None)
}
}
}
fn render_dir(path: std::path::PathBuf) -> Result<(Vec<u8>, Option<&'static str>), Error> {
let mut body = String::new();
for entry in std::fs::read_dir(&path)? {
if let Ok(entry) = entry {
let is_dir = entry.metadata().map(|m| m.is_dir()).unwrap_or(false);
let name = entry.file_name().to_string_lossy()
.replace('&', "&amp;")
.replace('<', "&lt;")
.replace('>', "&gt;")
.replace('\'', "&apos;")
.replace('"', "&quot;") + (if is_dir { "/" } else { "" });
body.push_str(&format!("<a href=\"{}\">{}</a><br>\n", name, name));
}
}
Ok((body.into_bytes(), Some("text/html")))
}
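The replacement order in `render_dir` matters: `&` is escaped first so that the entities introduced by the later replacements are not double-escaped. The same escaping as a standalone helper (a sketch for illustration; the server inlines this rather than naming a function):

```rust
// HTML-escape a file name the way render_dir does: '&' first,
// then the remaining special characters.
fn escape_html(name: &str) -> String {
    name.replace('&', "&amp;")
        .replace('<', "&lt;")
        .replace('>', "&gt;")
        .replace('\'', "&apos;")
        .replace('"', "&quot;")
}
```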
impl HttpStaticFileServer {
fn respond(&mut self, t: &mut Activation, req: &http::HttpRequest, res: &Arc<Cap>) -> ActorResult {
let path_prefix_elements = usize::try_from(&self.path_prefix_elements)
.map_err(|_| "Bad pathPrefixElements")?;
let mut is_index = false;
let mut path = req.path[path_prefix_elements..].iter().cloned().collect::<Vec<String>>();
if let Some(e) = path.last_mut() {
if e.len() == 0 {
*e = "index.html".into();
is_index = true;
}
}
let mut realpath = std::path::PathBuf::from(&self.dir);
for element in path.into_iter() {
if element.contains('/') || element.starts_with('.') { Err("Invalid path element")?; }
realpath.push(element);
}
let (body, mime_type) = match std::fs::File::open(&realpath) {
Err(_) => {
if is_index {
realpath.pop();
}
if std::fs::metadata(&realpath).is_ok_and(|m| m.is_dir()) {
render_dir(realpath)?
} else {
return send_empty(t, res, 404, "Not found")
}
},
Ok(mut fh) => {
if fh.metadata().is_ok_and(|m| m.is_dir()) {
drop(fh);
res.message(t, language(), &http::HttpResponse::Status {
code: 301.into(), message: "Moved permanently".into() });
res.message(t, language(), &http::HttpResponse::Header {
name: "location".into(), value: format!("/{}/", req.path.join("/")) });
return send_done(t, res);
} else {
let mut buf = Vec::new();
fh.read_to_end(&mut buf)?;
if let Some(extension) = realpath.extension().and_then(|e| e.to_str()) {
(buf, MIME_TABLE.get(extension).map(|m| m.as_str()))
} else {
(buf, None)
}
}
}
};
res.message(t, language(), &http::HttpResponse::Status {
code: 200.into(), message: "OK".into() });
if let Some(mime_type) = mime_type {
res.message(t, language(), &http::HttpResponse::Header {
name: "content-type".into(), value: mime_type.to_owned() });
}
res.message(t, language(), &http::HttpResponse::Done {
chunk: Box::new(http::Chunk::Bytes(body)) });
Ok(())
}
}
impl Entity<http::HttpContext<AnyValue>> for HttpStaticFileServer {
fn assert(&mut self, t: &mut Activation, assertion: http::HttpContext<AnyValue>, _handle: Handle) -> ActorResult {
let http::HttpContext { req, res } = assertion;
if let Err(e) = self.respond(t, &req, &res) {
tracing::error!(?req, error=?e);
send_empty(t, &res, 500, "Internal server error")?;
}
Ok(())
}
}
fn run_static_file_server(t: &mut Activation, ds: Arc<Cap>, spec: HttpStaticFileServer) -> ActorResult {
let object = Cap::guard(&language().syndicate, t.create(spec.clone()));
ds.assert(t, language(), &syndicate::schemas::service::ServiceObject {
service_name: language().unparse(&spec),
object: AnyValue::domain(object),
});
Ok(())
}


@ -0,0 +1,7 @@
pub mod config_watcher;
pub mod daemon;
pub mod debt_reporter;
pub mod gatekeeper;
pub mod http_router;
pub mod tcp_relay_listener;
pub mod unix_relay_listener;


@ -0,0 +1,120 @@
use preserves_schema::Codec;
use std::convert::TryFrom;
use std::sync::Arc;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::preserves::rec;
use syndicate::preserves::value::NestedValue;
use syndicate::supervise::{Supervisor, SupervisorConfiguration};
use syndicate::trace;
use tokio::net::TcpListener;
use crate::language::language;
use crate::lifecycle;
use crate::protocol::detect_protocol;
use crate::schemas::internal_services::TcpWithoutHttp;
use syndicate_macros::during;
pub fn on_demand(t: &mut Activation, ds: Arc<Cap>) {
t.spawn(Some(AnyValue::symbol("tcp_relay_listener")), move |t| {
enclose!((ds) during!(t, ds, language(), <run-service $spec: TcpWithoutHttp::<AnyValue>>, |t| {
run_supervisor(t, ds.clone(), spec)
}));
Ok(())
});
}
fn run_supervisor(t: &mut Activation, ds: Arc<Cap>, spec: TcpWithoutHttp) -> ActorResult {
Supervisor::start(
t,
Some(rec![AnyValue::symbol("relay"), language().unparse(&spec)]),
SupervisorConfiguration::default(),
enclose!((ds, spec) lifecycle::updater(ds, spec)),
enclose!((ds) move |t| enclose!((ds, spec) run(t, ds, spec))))
}
fn run(t: &mut Activation, ds: Arc<Cap>, spec: TcpWithoutHttp) -> ActorResult {
lifecycle::terminate_on_service_restart(t, &ds, &spec);
let httpd = t.named_field("httpd", None::<Arc<Cap>>);
{
let ad = spec.addr.clone();
let ad2 = ad.clone();
let gk = spec.gatekeeper.clone();
enclose!((ds, httpd) during!(t, ds, language(),
<run-service <relay-listener #(&language().unparse(&ad)) #(&AnyValue::domain(gk)) $h>>, |t: &mut Activation| {
if let Some(h) = h.value().as_embedded().cloned() {
h.assert(t, language(), &syndicate::schemas::http::HttpListener { port: ad2.port.clone() });
*t.get_mut(&httpd) = Some(h.clone());
t.on_stop(enclose!((httpd) move |t| {
let f = t.get_mut(&httpd);
if *f == Some(h.clone()) { *f = None; }
Ok(())
}));
}
Ok(())
}));
}
let TcpWithoutHttp { addr, gatekeeper } = spec.clone();
let host = addr.host.clone();
let port = u16::try_from(&addr.port).map_err(|_| "Invalid TCP port number")?;
let facet = t.facet_ref();
let trace_collector = t.trace_collector();
t.linked_task(Some(AnyValue::symbol("listener")), async move {
let listen_addr = format!("{}:{}", host, port);
let listener = TcpListener::bind(listen_addr).await?;
{
let cause = trace_collector.as_ref().map(|_| trace::TurnCause::external("readiness"));
let account = Account::new(Some(AnyValue::symbol("readiness")), trace_collector.clone());
if !facet.activate(
&account, cause, |t| {
tracing::info!("listening");
ds.assert(t, language(), &lifecycle::ready(&spec));
Ok(())
})
{
return Ok(LinkedTaskTermination::Normal);
}
}
loop {
let (stream, addr) = listener.accept().await?;
let gatekeeper = gatekeeper.clone();
let name = Some(rec![AnyValue::symbol("tcp"), AnyValue::new(format!("{}", &addr))]);
let cause = trace_collector.as_ref().map(|_| trace::TurnCause::external("connect"));
let account = Account::new(name.clone(), trace_collector.clone());
if !facet.activate(
&account, cause, enclose!((trace_collector, httpd) move |t| {
let httpd = t.get(&httpd).clone();
t.spawn(name, move |t| {
Ok(t.linked_task(None, {
let facet = t.facet_ref();
async move {
detect_protocol(trace_collector,
facet,
stream,
gatekeeper,
httpd,
addr,
port).await?;
Ok(LinkedTaskTermination::KeepFacet)
}
}))
});
Ok(())
}))
{
return Ok(LinkedTaskTermination::Normal);
}
}
});
Ok(())
}


@ -0,0 +1,120 @@
use preserves_schema::Codec;
use std::io;
use std::path::PathBuf;
use std::sync::Arc;
use syndicate::actor::*;
use syndicate::enclose;
use syndicate::error::Error;
use syndicate::preserves::rec;
use syndicate::preserves::value::NestedValue;
use syndicate::relay;
use syndicate::supervise::{Supervisor, SupervisorConfiguration};
use syndicate::trace;
use tokio::net::UnixListener;
use tokio::net::UnixStream;
use crate::language::language;
use crate::lifecycle;
use crate::protocol::run_connection;
use crate::schemas::internal_services::UnixRelayListener;
use syndicate_macros::during;
pub fn on_demand(t: &mut Activation, ds: Arc<Cap>) {
t.spawn(Some(AnyValue::symbol("unix_relay_listener")), move |t| {
Ok(during!(t, ds, language(), <run-service $spec: UnixRelayListener::<AnyValue>>, |t| {
Supervisor::start(
t,
Some(rec![AnyValue::symbol("relay"), language().unparse(&spec)]),
SupervisorConfiguration::default(),
enclose!((ds, spec) lifecycle::updater(ds, spec)),
enclose!((ds) move |t| enclose!((ds, spec) run(t, ds, spec))))
}))
});
}
fn run(t: &mut Activation, ds: Arc<Cap>, spec: UnixRelayListener) -> ActorResult {
lifecycle::terminate_on_service_restart(t, &ds, &spec);
let path_str = spec.addr.path.clone();
let facet = t.facet_ref();
let trace_collector = t.trace_collector();
t.linked_task(Some(AnyValue::symbol("listener")), async move {
let listener = bind_unix_listener(&PathBuf::from(path_str)).await?;
{
let cause = trace_collector.as_ref().map(|_| trace::TurnCause::external("readiness"));
let account = Account::new(Some(AnyValue::symbol("readiness")), trace_collector.clone());
if !facet.activate(
&account, cause, |t| {
tracing::info!("listening");
ds.assert(t, language(), &lifecycle::ready(&spec));
Ok(())
})
{
return Ok(LinkedTaskTermination::Normal);
}
}
loop {
let (stream, _addr) = listener.accept().await?;
let peer = stream.peer_cred()?;
let gatekeeper = spec.gatekeeper.clone();
let name = Some(rec![AnyValue::symbol("unix"),
AnyValue::new(peer.pid().unwrap_or(-1)),
AnyValue::new(peer.uid())]);
let cause = trace_collector.as_ref().map(|_| trace::TurnCause::external("connect"));
let account = Account::new(name.clone(), trace_collector.clone());
if !facet.activate(
&account, cause, enclose!((trace_collector) move |t| {
t.spawn(name, |t| {
Ok(t.linked_task(None, {
let facet = t.facet_ref();
async move {
tracing::info!(protocol = %"unix");
let (i, o) = stream.into_split();
run_connection(trace_collector,
facet,
relay::Input::Bytes(Box::pin(i)),
relay::Output::Bytes(Box::pin(o)),
gatekeeper);
Ok(LinkedTaskTermination::KeepFacet)
}
}))
});
Ok(())
}))
{
return Ok(LinkedTaskTermination::Normal);
}
}
});
Ok(())
}
async fn bind_unix_listener(path: &PathBuf) -> Result<UnixListener, Error> {
match UnixListener::bind(path) {
Ok(s) => Ok(s),
Err(e) if e.kind() == io::ErrorKind::AddrInUse => {
// Potentially-stale socket file sitting around. Try
// connecting to it to see if it is alive, and remove it
// if not.
match UnixStream::connect(path).await {
Ok(_probe) => Err(e)?, // Someone's already there! Give up.
Err(f) if f.kind() == io::ErrorKind::ConnectionRefused => {
// Try to steal the socket.
tracing::debug!("Cleaning stale socket");
std::fs::remove_file(path)?;
Ok(UnixListener::bind(path)?)
}
Err(error) => {
tracing::error!(?error, "Problem while probing potentially-stale socket");
return Err(e)? // signal the *original* error, not the probe error
}
}
},
Err(e) => Err(e)?,
}
}
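The stale-socket logic above follows a common Unix pattern: if `bind` fails with `AddrInUse`, probe the socket with a `connect`; `ConnectionRefused` means no process is listening, so the file is a leftover that can be unlinked and rebound. A blocking, std-only sketch of the same idea (this is an illustration using `std::os::unix::net`, not the async tokio version the server uses; the function name is hypothetical):

```rust
use std::io::ErrorKind;
use std::os::unix::net::{UnixListener, UnixStream};
use std::path::Path;

// Bind a Unix socket, reclaiming a stale socket file if no one is
// listening on it. On any other failure, report the original bind error.
fn bind_reclaiming(path: &Path) -> std::io::Result<UnixListener> {
    match UnixListener::bind(path) {
        Ok(l) => Ok(l),
        Err(e) if e.kind() == ErrorKind::AddrInUse => {
            match UnixStream::connect(path) {
                // A live listener answered: the address is genuinely in use.
                Ok(_live) => Err(e),
                // Nobody home: unlink the stale file and bind again.
                Err(f) if f.kind() == ErrorKind::ConnectionRefused => {
                    std::fs::remove_file(path)?;
                    UnixListener::bind(path)
                }
                // Unexpected probe failure: surface the original error.
                Err(_) => Err(e),
            }
        }
        Err(e) => Err(e),
    }
}
```

Dropping a `std` `UnixListener` closes the descriptor but leaves the socket file on disk, which is exactly the stale state this routine recovers from.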


@ -0,0 +1,23 @@
[package]
name = "syndicate-tools"
version = "0.18.0"
authors = ["Tony Garnock-Jones <tonyg@leastfixedpoint.com>"]
edition = "2018"
description = "Syndicate command-line utilities."
homepage = "https://syndicate-lang.org/"
repository = "https://git.syndicate-lang.org/syndicate-lang/syndicate-rs"
license = "Apache-2.0"
[dependencies]
preserves = "4.995"
syndicate = { path = "../syndicate", version = "0.40.0"}
clap = { version = "^4.0", features = ["derive"] }
clap_complete = "^4.0"
noise-protocol = "0.1"
noise-rust-crypto = "0.5"
[package.metadata.workspaces]
independent = true


@ -0,0 +1,168 @@
use std::io;
use std::str::FromStr;
use clap::ArgGroup;
use clap::CommandFactory;
use clap::Parser;
use clap::Subcommand;
use clap::arg;
use clap_complete::{generate, Shell};
use noise_protocol::DH;
use noise_protocol::Hash;
use noise_rust_crypto::Blake2s;
use noise_rust_crypto::X25519;
use preserves::hex::HexParser;
use preserves::value::BytesBinarySource;
use preserves::value::NestedValue;
use preserves::value::NoEmbeddedDomainCodec;
use preserves::value::Reader;
use preserves::value::TextReader;
use preserves::value::ViaCodec;
use preserves::value::TextWriter;
use syndicate::language;
use syndicate::preserves_schema::Codec;
use syndicate::preserves_schema::ParseError;
use syndicate::schemas::noise;
use syndicate::sturdy::Caveat;
use syndicate::sturdy::SturdyRef;
use syndicate::sturdy::_Any;
#[derive(Clone, Debug)]
struct Preserves<N: NestedValue>(N);
#[derive(Subcommand, Debug)]
enum Action {
#[command(group(ArgGroup::new("key").required(true)))]
/// Generate a fresh SturdyRef from an OID value and a key
Mint {
#[arg(long, value_name="VALUE")]
/// Preserves value to use as SturdyRef OID
oid: Preserves<_Any>,
#[arg(long, group="key")]
/// Key phrase
phrase: Option<String>,
#[arg(long, group="key")]
/// Key bytes, encoded as hex
hex: Option<String>,
#[arg(long)]
/// Caveats to add
caveat: Vec<Preserves<_Any>>,
},
#[command(group(ArgGroup::new("key").required(true)))]
/// Generate a fresh NoiseServiceSpec from a service selector and a key
Noise {
#[arg(long, value_name="VALUE")]
/// Preserves value to use as the service selector
service: Preserves<_Any>,
#[arg(long, value_name="PROTOCOL")]
/// Noise handshake protocol name
protocol: Option<String>,
#[arg(long, group="key")]
/// Key phrase
phrase: Option<String>,
#[arg(long, group="key")]
/// Key bytes, encoded as hex
hex: Option<String>,
#[arg(long, group="key")]
/// Generate a random key
random: bool,
},
/// Emit shell completion code
Completions {
/// Shell dialect to generate
shell: Shell,
}
}
#[derive(Parser, Debug)]
#[command(version)]
struct Cli {
#[command(subcommand)]
action: Action,
}
impl<N: NestedValue> FromStr for Preserves<N> {
type Err = ParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(Preserves(TextReader::new(&mut BytesBinarySource::new(s.as_bytes()),
ViaCodec::new(NoEmbeddedDomainCodec)).demand_next(false)?))
}
}
fn main() -> io::Result<()> {
let args = <Cli as Parser>::parse();
match args.action {
Action::Completions { shell } => {
let mut cmd = <Cli as CommandFactory>::command();
let name = cmd.get_name().to_string();
generate(shell, &mut cmd, name, &mut io::stdout());
}
Action::Noise { service, protocol, phrase, hex, random } => {
let key =
if random {
X25519::genkey()
} else if let Some(hex) = hex {
let mut hash = Blake2s::default();
hash.input(hex.as_bytes());
hash.result()
} else if let Some(phrase) = phrase {
let mut hash = Blake2s::default();
hash.input(phrase.as_bytes());
hash.result()
} else {
unreachable!()
};
let n = noise::NoiseServiceSpec {
base: noise::NoiseSpec {
key: X25519::pubkey(&key).to_vec(),
service: noise::ServiceSelector(service.0),
pre_shared_keys: noise::NoisePreSharedKeys::Absent,
protocol: if let Some(p) = protocol {
noise::NoiseProtocol::Present { protocol: p }
} else {
noise::NoiseProtocol::Absent
},
},
secret_key: noise::SecretKeyField::Present {
secret_key: key.to_vec(),
},
};
println!("{}", TextWriter::encode(&mut NoEmbeddedDomainCodec,
&language().unparse(&n))?);
}
Action::Mint { oid, phrase, hex, caveat: caveats } => {
let key =
if let Some(hex) = hex {
HexParser::Liberal.decode(&hex).expect("hex encoded sturdyref")
} else if let Some(phrase) = phrase {
phrase.as_bytes().to_owned()
} else {
unreachable!()
};
let attenuation = caveats.into_iter().map(|c| {
let r = language().parse(&c.0);
if let Ok(Caveat::Unknown(_)) = &r {
eprintln!("Warning: Unknown caveat format: {:?}", &c.0);
}
r
}).collect::<Result<Vec<Caveat>, _>>()?;
let m = SturdyRef::mint(oid.0, &key).attenuate(&attenuation)?;
println!("{}", TextWriter::encode(&mut NoEmbeddedDomainCodec,
&language().unparse(&m))?);
}
}
Ok(())
}

syndicate/Cargo.toml

@ -0,0 +1,53 @@
[package]
name = "syndicate"
version = "0.40.1"
authors = ["Tony Garnock-Jones <tonyg@leastfixedpoint.com>"]
edition = "2018"
description = "Syndicated Actor model for Rust, including network communication and Dataspaces."
homepage = "https://syndicate-lang.org/"
repository = "https://git.syndicate-lang.org/syndicate-lang/syndicate-rs"
license = "Apache-2.0"
[features]
vendored-openssl = ["openssl/vendored"]
[build-dependencies]
preserves-schema = "5.995"
[dependencies]
preserves = "4.995"
preserves-schema = "5.995"
tokio = { version = "1.10", features = ["io-std", "io-util", "macros", "rt", "rt-multi-thread", "time"] }
tokio-util = "0.6"
bytes = "1.0"
futures = "0.3"
blake2 = "0.10"
getrandom = "0.2"
hmac = "0.12"
lazy_static = "1.4"
parking_lot = "0.11"
tracing = "0.1"
tracing-subscriber = "0.2"
tracing-futures = "0.2"
# Only used for vendored-openssl, which in turn is being used for cross-builds
openssl = { version = "0.10", optional = true }
[dev-dependencies]
criterion = "0.3"
[[bench]]
name = "bench_dataspace"
harness = false
[[bench]]
name = "ring"
harness = false
[package.metadata.workspaces]
independent = true

syndicate/Makefile

@ -0,0 +1,11 @@
all: binary-debug
# cargo install cargo-watch
watch:
cargo watch -c -x check -x 'test -- --nocapture'
inotifytest:
inotifytest sh -c 'reset; cargo build && RUST_BACKTRACE=1 cargo test -- --nocapture'
binary-debug:
cargo build --all-targets

syndicate/README.md

@ -0,0 +1,8 @@
This crate implements the
[Syndicated Actor model](https://syndicate-lang.org/about/) for Rust,
including
- intra-process communication (the [actor] module),
- point-to-point links between actor spaces (the [relay] module),
- and Dataspace objects (the [dataspace] module) for replicating
state and messages among interested parties.

@@ -0,0 +1,164 @@
use criterion::{criterion_group, criterion_main, Criterion};
use std::sync::Arc;
use std::sync::atomic::AtomicU64;
use std::sync::atomic::Ordering;
use std::time::Instant;
use syndicate::language;
use syndicate::actor::*;
use syndicate::during::entity;
use syndicate::dataspace::Dataspace;
use syndicate::schemas::dataspace::Observe;
use syndicate::schemas::dataspace_patterns as p;
use syndicate::value::Map;
use syndicate::value::NestedValue;
use syndicate::value::Value;
use tokio::runtime::Runtime;
use tracing::Level;
#[inline]
fn says(who: AnyValue, what: AnyValue) -> AnyValue {
let mut r = Value::simple_record("Says", 2);
r.fields_vec_mut().push(who);
r.fields_vec_mut().push(what);
r.finish().wrap()
}
struct ShutdownEntity;
impl Entity<AnyValue> for ShutdownEntity {
fn message(&mut self, t: &mut Activation, _m: AnyValue) -> ActorResult {
Ok(t.stop())
}
}
pub fn bench_pub(c: &mut Criterion) {
let filter = tracing_subscriber::filter::EnvFilter::from_default_env()
.add_directive(tracing_subscriber::filter::LevelFilter::INFO.into());
let subscriber = tracing_subscriber::FmtSubscriber::builder()
.with_ansi(true)
.with_max_level(Level::TRACE)
.with_env_filter(filter)
.finish();
tracing::subscriber::set_global_default(subscriber)
.expect("Could not set tracing global subscriber");
let rt = Runtime::new().unwrap();
c.bench_function("publication alone", |b| {
b.iter_custom(|iters| {
let start = Instant::now();
rt.block_on(async move {
Actor::top(None, move |t| {
let _ = t.prevent_inert_check();
// The reason this works is that all the messages to `ds` will be delivered
// before the message to `shutdown`, because `ds` and `shutdown` are in the
// same Actor.
let ds = t.create(Dataspace::new(None));
let shutdown = t.create(ShutdownEntity);
for _ in 0..iters {
t.message(&ds, says(AnyValue::new("bench_pub"),
Value::ByteString(vec![]).wrap()));
}
t.message(&shutdown, AnyValue::new(true));
Ok(())
}).await.unwrap().unwrap();
});
start.elapsed()
})
});
c.bench_function("publication and subscription", |b| {
b.iter_custom(|iters| {
let start = Instant::now();
rt.block_on(async move {
let turn_count = Arc::new(AtomicU64::new(0));
Actor::top(None, {
let iters = iters.clone();
let turn_count = Arc::clone(&turn_count);
move |t| {
let ds = Cap::new(&t.create(Dataspace::new(None)));
let shutdown = entity(())
.on_asserted(|_, _, _| Ok(Some(Box::new(|_, t| Ok(t.stop())))))
.create_cap(t);
ds.assert(t, language(), &Observe {
pattern: p::Pattern::Bind {
pattern: Box::new(p::Pattern::Lit {
value: Box::new(p::AnyAtom::Symbol("consumer".to_owned())),
}),
},
observer: shutdown,
});
t.spawn(Some(AnyValue::symbol("consumer")), move |t| {
struct Receiver(Arc<AtomicU64>);
impl Entity<AnyValue> for Receiver {
fn message(&mut self, _t: &mut Activation, _m: AnyValue) -> ActorResult {
self.0.fetch_add(1, Ordering::Relaxed);
Ok(())
}
}
let shutdown = Cap::new(&t.create(ShutdownEntity));
let receiver = Cap::new(&t.create(Receiver(Arc::clone(&turn_count))));
ds.assert(t, &(), &AnyValue::symbol("consumer"));
ds.assert(t, language(), &Observe {
pattern: p::Pattern::Group {
type_: Box::new(p::GroupType::Rec {
label: AnyValue::symbol("Says"),
}),
entries: Map::from([
(p::_Any::new(0), p::Pattern::Lit {
value: Box::new(p::AnyAtom::String("bench_pub".to_owned())),
}),
(p::_Any::new(1), p::Pattern::Bind {
pattern: Box::new(p::Pattern::Discard),
}),
]),
},
observer: receiver,
});
ds.assert(t, language(), &Observe {
pattern: p::Pattern::Bind {
pattern: Box::new(p::Pattern::Lit {
value: Box::new(p::AnyAtom::Bool(true)),
}),
},
observer: shutdown,
});
t.after(core::time::Duration::from_secs(0), move |t| {
for _i in 0..iters {
ds.message(t, &(), &says(AnyValue::new("bench_pub"),
Value::ByteString(vec![]).wrap()));
}
ds.message(t, &(), &AnyValue::new(true));
Ok(())
});
Ok(())
});
Ok(())
}
}).await.unwrap().unwrap();
let actual_turns = turn_count.load(Ordering::SeqCst);
if actual_turns != iters {
panic!("Expected {}, got {} messages", iters, actual_turns);
}
});
start.elapsed()
})
});
}
criterion_group!(publish, bench_pub);
criterion_main!(publish);

syndicate/benches/ring.rs (new file)
@@ -0,0 +1,145 @@
use criterion::{criterion_group, criterion_main, Criterion};
use std::sync::Arc;
use std::sync::atomic::AtomicU64;
use std::sync::atomic::Ordering;
use std::time::Duration;
use std::time::Instant;
use syndicate::actor::*;
use syndicate::preserves::rec;
use syndicate::value::NestedValue;
use tokio::runtime::Runtime;
static ACTORS_CREATED: AtomicU64 = AtomicU64::new(0);
static MESSAGES_SENT: AtomicU64 = AtomicU64::new(0);
pub fn bench_ring(c: &mut Criterion) {
syndicate::convenient_logging().unwrap();
let rt = Runtime::new().unwrap();
c.bench_function("Armstrong's Ring", |b| {
// "Write a ring benchmark. Create N processes in a ring. Send a message round the ring
// M times so that a total of N * M messages get sent. Time how long this takes for
// different values of N and M."
// -- Joe Armstrong, "Programming Erlang: Software for a Concurrent World"
//
// Here we fix N = 1000, and let `iters` take on the role of M.
//
b.iter_custom(|iters| {
const ACTOR_COUNT: u32 = 1000;
ACTORS_CREATED.store(0, Ordering::SeqCst);
MESSAGES_SENT.store(0, Ordering::SeqCst);
let (tx, rx) = std::sync::mpsc::sync_channel(1);
rt.block_on(async move {
struct Forwarder {
next: Arc<Ref<()>>,
}
struct Counter {
start: Instant,
tx: std::sync::mpsc::SyncSender<Duration>,
remaining_to_send: u64,
iters: u64,
next: Arc<Ref<()>>,
}
struct Spawner {
self_ref: Arc<Ref<Arc<Ref<()>>>>, // !
tx: std::sync::mpsc::SyncSender<Duration>,
iters: u64,
i: u32,
c: Arc<Ref<()>>,
}
impl Entity<()> for Forwarder {
fn message(&mut self, t: &mut Activation, _message: ()) -> ActorResult {
MESSAGES_SENT.fetch_add(1, Ordering::Relaxed);
t.message(&self.next, ());
Ok(())
}
}
impl Counter {
fn step(&mut self, t: &mut Activation) -> ActorResult {
if self.remaining_to_send > 0 {
self.remaining_to_send -= 1;
MESSAGES_SENT.fetch_add(1, Ordering::Relaxed);
t.message(&self.next, ());
} else {
tracing::info!(iters = self.iters,
actors_created = ACTORS_CREATED.load(Ordering::SeqCst),
messages_sent = MESSAGES_SENT.load(Ordering::SeqCst));
t.stop();
self.tx.send(self.start.elapsed() / ACTOR_COUNT).unwrap()
}
Ok(())
}
}
impl Entity<()> for Counter {
fn message(&mut self, t: &mut Activation, _message: ()) -> ActorResult {
self.step(t)
}
}
impl Spawner {
fn step(&mut self, t: &mut Activation, next: Arc<Ref<()>>) -> ActorResult {
if self.i < ACTOR_COUNT {
let i = self.i;
self.i += 1;
let spawner_ref = Arc::clone(&self.self_ref);
ACTORS_CREATED.fetch_add(1, Ordering::Relaxed);
t.spawn(
Some(rec![AnyValue::symbol("forwarder"), AnyValue::new(i)]),
move |t| {
let _ = t.prevent_inert_check();
let f = t.create(Forwarder {
next,
});
t.message(&spawner_ref, f);
Ok(())
});
} else {
let mut c_state = Counter {
start: Instant::now(),
tx: self.tx.clone(),
remaining_to_send: self.iters,
iters: self.iters,
next,
};
c_state.step(t)?;
self.c.become_entity(c_state);
}
Ok(())
}
}
impl Entity<Arc<Ref<()>>> for Spawner {
fn message(&mut self, t: &mut Activation, f: Arc<Ref<()>>) -> ActorResult {
self.step(t, f)
}
}
ACTORS_CREATED.fetch_add(1, Ordering::Relaxed);
Actor::top(None, move |t| {
let _ = t.prevent_inert_check();
let mut s = Spawner {
self_ref: t.create_inert(),
tx,
iters,
i: 1,
c: t.create_inert(),
};
s.step(t, Arc::clone(&s.c))?;
Arc::clone(&s.self_ref).become_entity(s);
Ok(())
}).await.unwrap().unwrap();
});
rx.recv().unwrap()
})
});
}
criterion_group!(ring, bench_ring);
criterion_main!(ring);
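For comparison with the benchmark above, the same N × M ring measurement can be sketched as a standalone program using plain OS threads and `std::sync::mpsc` channels instead of Syndicated Actors. This is an illustrative sketch only (names like `ring` and `Msg` are invented here, not part of the crate); a shutdown wave lets every thread exit cleanly once the token has made all its hops:

```rust
use std::sync::mpsc;
use std::thread;

enum Msg {
    Hop(u64), // remaining hops after this one
    Stop,     // shutdown wave so every thread exits cleanly
}

/// Send a token n_nodes * rounds times around a ring of OS threads,
/// returning the total number of hops observed.
fn ring(n_nodes: usize, rounds: u64) -> u64 {
    let mut txs = Vec::with_capacity(n_nodes);
    let mut rxs = Vec::with_capacity(n_nodes);
    for _ in 0..n_nodes {
        let (tx, rx) = mpsc::channel::<Msg>();
        txs.push(tx);
        rxs.push(rx);
    }
    let (done_tx, done_rx) = mpsc::channel::<()>();
    let handles: Vec<_> = rxs
        .into_iter()
        .enumerate()
        .map(|(i, rx)| {
            // Each node forwards to its successor; the last wraps to the first.
            let next = txs[(i + 1) % n_nodes].clone();
            let done = done_tx.clone();
            thread::spawn(move || {
                let mut hops = 0u64;
                while let Ok(msg) = rx.recv() {
                    match msg {
                        Msg::Hop(0) => {
                            hops += 1;
                            done.send(()).unwrap();
                            let _ = next.send(Msg::Stop);
                            break;
                        }
                        Msg::Hop(r) => {
                            hops += 1;
                            let _ = next.send(Msg::Hop(r - 1));
                        }
                        Msg::Stop => {
                            // Propagate shutdown; a failed send just means
                            // the successor has already exited.
                            let _ = next.send(Msg::Stop);
                            break;
                        }
                    }
                }
                hops
            })
        })
        .collect();
    let total_hops = n_nodes as u64 * rounds;
    txs[0].send(Msg::Hop(total_hops - 1)).unwrap();
    done_rx.recv().unwrap();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    // 100 nodes, 10 rounds: 1000 messages in total.
    assert_eq!(ring(100, 10), 1000);
}
```

With one OS thread and one unbounded channel per node, this measures roughly the same message-passing pattern as the Criterion benchmark, minus the actor machinery (facets, accounts, `become_entity`).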

syndicate/build.rs (new file)
@@ -0,0 +1,36 @@
use preserves_schema::compiler::*;
mod syndicate_plugins {
use preserves_schema::compiler::*;
use preserves_schema::gen::schema::*;
// use preserves_schema::syntax::block::constructors::*;
#[derive(Debug)]
pub(super) struct PatternPlugin;
impl Plugin for PatternPlugin {
fn generate_definition(
&self,
_m: &mut context::ModuleContext,
_definition_name: &str,
_definition: &Definition,
) {
// TODO: Emit code for building instances of sturdy.Pattern and sturdy.Template
}
}
}
fn main() -> std::io::Result<()> {
let buildroot = std::path::PathBuf::from(std::env::var_os("OUT_DIR").unwrap());
let mut gen_dir = buildroot.clone();
gen_dir.push("src/schemas");
let mut c = CompilerConfig::new("crate::schemas".to_owned());
c.plugins.push(Box::new(syndicate_plugins::PatternPlugin));
c.add_external_module(ExternalModule::new(vec!["EntityRef".to_owned()], "crate::actor"));
let inputs = expand_inputs(&vec!["protocols/schema-bundle.bin".to_owned()])?;
c.load_schemas_and_bundles(&inputs, &vec![])?;
compile(&c, &mut CodeCollector::files(gen_dir))
}

syndicate/doc/actor.md (new file)
@@ -0,0 +1,18 @@
The [actor][crate::actor] module is the core of the Syndicated Actor model implementation.
Central features:
- struct [`Activation`], the API for programming a Syndicated Actor
object
- trait [`Entity`], the core protocol that must be implemented by
every object
- struct [`Facet`], a node in the tree of nested conversations that
an Actor is participating in
- type [`AnyValue`], the type of messages and assertions that can be
exchanged among distributed objects, including via
[dataspace][crate::dataspace]
- struct [`Ref<M>`], a reference to a local or remote object
- struct [`Cap`], a specialization of `Ref<M>` for
messages/assertions of type `AnyValue`
- struct [`Guard`], an adapter for converting an underlying
[`Ref<M>`] to a [`Cap`]

@@ -0,0 +1,76 @@
# Flow control
- struct [`Account`]
- struct [`LoanedItem`]
In order to handle high-speed scenarios where actors can become
overloaded by incoming events, this crate takes a
([possibly novel](https://syndicate-lang.org/journal/2021/09/02/internal-flow-control))
approach to *internal flow control* that is a variant of "credit-based
flow control" (as widely used in telephony systems).
The idea is to associate each individually-identifiable activity in an
actor system with an [*account*][Account] that records how much
outstanding work has to be done, system-wide, to fully complete
processing of that activity.
Each Actor scheduling new activities in response to some external
source (e.g. while reading from a network socket in a
[linked task][Activation::linked_task]) calls
[`Account::ensure_clear_funds`] on its associated [`Account`]. This
will suspend the actor until enough of the account's "debt" has been
"cleared". (In the case of reading from a socket, this causes the TCP
socket's read buffers to fill up and the TCP window to close, which
throttles upstream senders.)
Every time any actor sends an event to any other actor, a
[`LoanedItem`] is constructed which "borrows" enough credit from some
nominated [`Account`] to cover the event. Crucially, when an actor is
*responding* to an event by *sending* more events, the account chosen
is *the one that the triggering event was charged to*. This lets the
server automatically account for fan-out of events.[^corollary]
Finally, once a `LoanedItem` is completely processed (i.e. when it is
[dropped][LoanedItem::drop]), its cost is "repaid" to its associated
account.
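The borrow-on-send, repay-on-drop mechanism can be sketched in a few lines of plain Rust. This is an illustrative model only, not the crate's actual `Account`/`LoanedItem` API (which, for instance, also provides the asynchronous `Account::ensure_clear_funds` described above):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicI64, Ordering};

// Account tracks the outstanding cost of all work charged to it.
struct Account {
    debt: AtomicI64,
}

impl Account {
    fn new() -> Arc<Account> {
        Arc::new(Account { debt: AtomicI64::new(0) })
    }
    fn balance(&self) -> i64 {
        self.debt.load(Ordering::SeqCst)
    }
}

// LoanedItem pairs a queued event with the account that paid for it:
// credit is borrowed when the event is enqueued, and repaid automatically
// when the item is dropped, i.e. when processing has finished.
struct LoanedItem<T> {
    account: Arc<Account>,
    cost: i64,
    item: T,
}

impl<T> LoanedItem<T> {
    fn new(account: &Arc<Account>, cost: i64, item: T) -> LoanedItem<T> {
        account.debt.fetch_add(cost, Ordering::SeqCst); // borrow
        LoanedItem { account: Arc::clone(account), cost, item }
    }
}

impl<T> Drop for LoanedItem<T> {
    fn drop(&mut self) {
        self.account.debt.fetch_sub(self.cost, Ordering::SeqCst); // repay
    }
}

fn main() {
    let account = Account::new();
    let event = LoanedItem::new(&account, 1, "some message");
    assert_eq!(account.balance(), 1); // debt while the event is in flight
    println!("processing {}", event.item);
    drop(event); // processing complete: the loan is repaid
    assert_eq!(account.balance(), 0);
}
```

Because repayment rides on `Drop`, an event can never be "forgotten" on the books: whichever actor finishes (or abandons) the event settles the debt automatically.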
## Does it work?
Anecdotally, this approach appears to work well. Experimenting using
`syndicate-server` with producers sending as quickly as they can,
producers are throttled by the server, and the server seems stable
even though its consumers are not able to keep up with the unthrottled
send rate of each producer.
## Example
Imagine an actor *A* receiving publications from a TCP/IP socket. If
it ever "owes" more than, say, 5 units of cost on its account, it
stops reading from its socket until its debt decreases. Each message
it forwards on to another actor costs it 1 unit. Say a given incoming
message *M* is routed to a dataspace actor *D* (thereby charging *A*'s
account 1 unit), where it results in nine outbound events *M* to peer
actors *O1*···*O9*.
Then, when *D* receives *M*, 1 unit is repaid to *A*'s account. When
*D* sends *M* on to each of *O1*···*O9*, 1 unit is charged to *A*'s
account, resulting in a total of 9 units charged. At this point in
time, *A*'s account has had a net +1 − 1 + 9 = 9 units withdrawn from
it as a result of *M*'s processing.
Imagine now that all of *O1*···*O9* are busy with other work. Then,
next time around *A*'s main loop, *A* notices that its outstanding
debt is higher than its configured threshold, and stops reading from
its socket. As each of *O1*···*O9* eventually gets around to
processing its copy of *M*, it repays the associated 1 unit to *A*'s
account.[^may-result-in-further-costs] Eventually, *A*'s account drops
below the threshold, *A* is woken up, and it resumes reading from its
socket.
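The ledger in this example is plain arithmetic, and can be replayed directly:

```rust
fn main() {
    // Ledger for actor A's account, following the example above
    // (positive = outstanding debt on A's account).
    let mut debt: i64 = 0;
    debt += 1; // A forwards M to the dataspace D: 1 unit charged
    debt -= 1; // D receives M: that unit is repaid
    debt += 9; // D relays M to O1···O9, still on A's account: 9 units
    assert_eq!(debt, 9); // net 9 units outstanding while O1···O9 are busy
    debt -= 9; // each Oi finishes its copy of M and repays 1 unit
    assert_eq!(debt, 0); // A drops below its threshold and resumes reading
}
```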
[^corollary]: A corollary to this is that, for each event internal to
the system, you can potentially identify the "ultimate cause" of
the event: namely, the actor owning the associated account.
[^may-result-in-further-costs]: Of course, if *O1*, say, sends *more*
events internally as a result of receiving *M*, more units will
be charged to *A*'s account!

@@ -0,0 +1,3 @@
# Linked Tasks
- [Activation::linked_task]

@@ -0,0 +1,38 @@
# What is an Actor?
A [Syndicated Actor][Actor] ([Garnock-Jones 2017](#GarnockJones2017))
is a collection of stateful [Entities][Entity], organised in a tree of
[Facets][Facet], with each facet representing a
[(sub)conversation](https://syndicate-lang.org/about/#conversational-concurrency-1)
that the Actor is engaged in. Each entity belongs to exactly one
facet; each facet has exactly one parent and zero or more children;
each actor has exactly one associated root facet. When a facet is its
actor's root facet, its parent is the actor itself; otherwise, its
parent is always another facet.
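As a toy illustration of these invariants (hypothetical types, not the crate's real `Actor`, `Facet`, or `Entity` implementations): owning each facet's children and entities directly in the parent makes "one facet per entity, one parent per facet, one root per actor" hold by construction.

```rust
// Entities live in facets; facets form a tree rooted at the
// actor's single root facet.
struct Entity {
    name: &'static str,
}

struct Facet {
    entities: Vec<Entity>,
    children: Vec<Facet>,
}

struct Actor {
    root: Facet, // the root facet's parent is the actor itself
}

impl Facet {
    // Every entity is counted exactly once, because each entity
    // belongs to exactly one facet.
    fn entity_count(&self) -> usize {
        self.entities.len()
            + self.children.iter().map(|c| c.entity_count()).sum::<usize>()
    }
}

fn main() {
    let actor = Actor {
        root: Facet {
            entities: vec![Entity { name: "dataspace" }],
            children: vec![
                Facet {
                    entities: vec![Entity { name: "observer" }],
                    children: vec![],
                },
                Facet {
                    entities: vec![],
                    children: vec![Facet {
                        entities: vec![Entity { name: "relay" }],
                        children: vec![],
                    }],
                },
            ],
        },
    };
    assert_eq!(actor.root.entity_count(), 3);
}
```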
In the taxonomy of De Koster *et al.* ([2016](#DeKoster2016)), the
Syndicated Actor model is a *Communicating Event-Loop* actor model,
similar to that offered by the E programming language
([Wikipedia](https://en.wikipedia.org/wiki/E_(programming_language));
[erights.org](http://erights.org/)).
- [Actor], [ActorRef], [Facet], [FacetRef], [ActorState], [Mailbox],
[Activation]
**References.**
- De Koster, Joeri, Tom Van Cutsem, and Wolfgang De Meuter. <a
name="DeKoster2016"
href="http://soft.vub.ac.be/Publications/2016/vub-soft-tr-16-11.pdf">“43
Years of Actors: A Taxonomy of Actor Models and Their Key
Properties.”</a> In Proc. AGERE, 31–40. Amsterdam, The
Netherlands, 2016. [DOI](https://doi.org/10.1145/3001886.3001890).
[PDF](http://soft.vub.ac.be/Publications/2016/vub-soft-tr-16-11.pdf).
- Garnock-Jones, Tony. <a name="GarnockJones2017"
href="https://syndicate-lang.org/tonyg-dissertation/html/">“Conversational
Concurrency.”</a> PhD, Northeastern University, 2017.
[Permalink](http://hdl.handle.net/2047/D20261862).
[PDF@Northeastern](https://repository.library.northeastern.edu/files/neu:cj82qs441/fulltext.pdf).
[PDF@syndicate-lang.org](https://syndicate-lang.org/papers/conversational-concurrency-201712310922.pdf).
[HTML](https://syndicate-lang.org/tonyg-dissertation/html/).

@@ -0,0 +1,8 @@
all: schema-bundle.bin
clean:
rm -f schema-bundle.bin
schema-bundle.bin: schemas/*.prs
preserves-schemac schemas > $@.tmp
mv $@.tmp $@

@@ -0,0 +1,44 @@
´³bundle·µ³tcp„´³schema·³version°³ definitions·³TcpLocal´³rec´³lit³ tcp-local„´³tupleµ´³named³host´³atom³String„„´³named³port´³atom³ SignedInteger„„„„„³ TcpRemote´³rec´³lit³
tcp-remote„´³tupleµ´³named³host´³atom³String„„´³named³port´³atom³ SignedInteger„„„„„³ TcpPeerInfo´³rec´³lit³tcp-peer„´³tupleµ´³named³handle´³embedded³any„„´³named³local´³refµ„³TcpLocal„„´³named³remote´³refµ„³ TcpRemote„„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³http„´³schema·³version°³ definitions·³Chunk´³orµµ±string´³atom³String„„µ±bytes´³atom³
ByteString„„„„³Headers´³dictof´³atom³Symbol„´³atom³String„„³MimeType´³atom³Symbol„³
QueryValue´³orµµ±string´³atom³String„„µ±file´³rec´³lit³file„´³tupleµ´³named³filename´³atom³String„„´³named³headers´³refµ„³Headers„„´³named³body´³atom³
ByteString„„„„„„„„³ HostPattern´³orµµ±host´³atom³String„„µ±any´³lit€„„„„³ HttpBinding´³rec´³lit³ http-bind„´³tupleµ´³named³host´³refµ„³ HostPattern„„´³named³port´³atom³ SignedInteger„„´³named³method´³refµ„³ MethodPattern„„´³named³path´³refµ„³ PathPattern„„´³named³handler´³embedded´³refµ„³ HttpRequest„„„„„„³ HttpContext´³rec´³lit³request„´³tupleµ´³named³req´³refµ„³ HttpRequest„„´³named³res´³embedded´³refµ„³ HttpResponse„„„„„„³ HttpRequest´³rec´³lit³ http-request„´³tupleµ´³named³sequenceNumber´³atom³ SignedInteger„„´³named³host´³refµ„³ RequestHost„„´³named³port´³atom³ SignedInteger„„´³named³method´³atom³Symbol„„´³named³path´³seqof´³atom³String„„„´³named³headers´³refµ„³Headers„„´³named³query´³dictof´³atom³Symbol„´³seqof´³refµ„³
QueryValue„„„„´³named³body´³refµ„³ RequestBody„„„„„³ HttpService´³rec´³lit³ http-service„´³tupleµ´³named³host´³refµ„³ HostPattern„„´³named³port´³atom³ SignedInteger„„´³named³method´³refµ„³ MethodPattern„„´³named³path´³refµ„³ PathPattern„„„„„³ PathPattern´³seqof´³refµ„³PathPatternElement„„³ RequestBody´³orµµ±absent´³lit€„„µ±present´³atom³
ByteString„„„„³ RequestHost´³orµµ±absent´³lit€„„µ±present´³atom³String„„„„³ HttpListener´³rec´³lit³ http-listener„´³tupleµ´³named³port´³atom³ SignedInteger„„„„„³ HttpResponse´³orµµ±status´³rec´³lit³status„´³tupleµ´³named³code´³atom³ SignedInteger„„´³named³message´³atom³String„„„„„„µ±header´³rec´³lit³header„´³tupleµ´³named³name´³atom³Symbol„„´³named³value´³atom³String„„„„„„µ±chunk´³rec´³lit³chunk„´³tupleµ´³named³chunk´³refµ„³Chunk„„„„„„µ±done´³rec´³lit³done„´³tupleµ´³named³chunk´³refµ„³Chunk„„„„„„„„³ MethodPattern´³orµµ±any´³lit€„„µ±specific´³atom³Symbol„„„„³PathPatternElement´³orµµ±label´³atom³String„„µ±wildcard´³lit³_„„µ±rest´³lit³...„„„„„³ embeddedType€„„µ³noise„´³schema·³version°³ definitions·³Packet´³orµµ±complete´³atom³
ByteString„„µ±
fragmented´³seqof´³atom³
ByteString„„„„„³ Initiator´³rec´³lit³ initiator„´³tupleµ´³named³initiatorSession´³embedded´³refµ„³Packet„„„„„„³ NoiseSpec´³andµ´³dict·³key´³named³key´³atom³
ByteString„„³service´³named³service´³refµ„³ServiceSelector„„„„´³named³protocol´³refµ„³ NoiseProtocol„„´³named³ preSharedKeys´³refµ„³NoisePreSharedKeys„„„„³ SessionItem´³orµµ± Initiator´³refµ„³ Initiator„„µ±Packet´³refµ„³Packet„„„„³ NoiseProtocol´³orµµ±present´³dict·³protocol´³named³protocol´³atom³String„„„„„µ±invalid´³dict·³protocol´³named³protocol³any„„„„µ±absent´³dict·„„„„„³ NoiseStepType´³lit³noise„³SecretKeyField´³orµµ±present´³dict·³ secretKey´³named³ secretKey´³atom³
ByteString„„„„„µ±invalid´³dict·³ secretKey´³named³ secretKey³any„„„„µ±absent´³dict·„„„„„³DefaultProtocol´³lit±!Noise_NK_25519_ChaChaPoly_BLAKE2s„³NoiseStepDetail´³refµ„³ServiceSelector„³ServiceSelector³any³NoiseServiceSpec´³andµ´³named³base´³refµ„³ NoiseSpec„„´³named³ secretKey´³refµ„³SecretKeyField„„„„³NoisePreSharedKeys´³orµµ±present´³dict·³ preSharedKeys´³named³ preSharedKeys´³seqof´³atom³
ByteString„„„„„„µ±invalid´³dict·³ preSharedKeys´³named³ preSharedKeys³any„„„„µ±absent´³dict·„„„„„³NoisePathStepDetail´³refµ„³ NoiseSpec„³NoiseDescriptionDetail´³refµ„³NoiseServiceSpec„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³timer„´³schema·³version°³ definitions·³SetTimer´³rec´³lit³ set-timer„´³tupleµ´³named³label³any„´³named³seconds´³atom³Double„„´³named³kind´³refµ„³ TimerKind„„„„„³ LaterThan´³rec´³lit³
later-than„´³tupleµ´³named³seconds´³atom³Double„„„„„³ TimerKind´³orµµ±relative´³lit³relative„„µ±absolute´³lit³absolute„„µ±clear´³lit³clear„„„„³ TimerExpired´³rec´³lit³ timer-expired„´³tupleµ´³named³label³any„´³named³seconds´³atom³Double„„„„„„³ embeddedType€„„µ³trace„´³schema·³version°³ definitions·³Oid³any³Name´³orµµ± anonymous´³rec´³lit³ anonymous„´³tupleµ„„„„µ±named´³rec´³lit³named„´³tupleµ´³named³name³any„„„„„„„³Target´³rec´³lit³entity„´³tupleµ´³named³actor´³refµ„³ActorId„„´³named³facet´³refµ„³FacetId„„´³named³oid´³refµ„³Oid„„„„„³TaskId³any³TurnId³any³ActorId³any³FacetId³any³ TurnCause´³orµµ±turn´³rec´³lit³ caused-by„´³tupleµ´³named³id´³refµ„³TurnId„„„„„„µ±cleanup´³rec´³lit³cleanup„´³tupleµ„„„„µ±linkedTaskRelease´³rec´³lit³linked-task-release„´³tupleµ´³named³id´³refµ„³TaskId„„´³named³reason´³refµ„³LinkedTaskReleaseReason„„„„„„µ±periodicActivation´³rec´³lit³periodic-activation„´³tupleµ´³named³period´³atom³Double„„„„„„µ±delay´³rec´³lit³delay„´³tupleµ´³named³ causingTurn´³refµ„³TurnId„„´³named³amount´³atom³Double„„„„„„µ±external´³rec´³lit³external„´³tupleµ´³named³ description³any„„„„„„„³ TurnEvent´³orµµ±assert´³rec´³lit³assert„´³tupleµ´³named³ assertion´³refµ„³AssertionDescription„„´³named³handle´³refµ³protocol„³Handle„„„„„„µ±retract´³rec´³lit³retract„´³tupleµ´³named³handle´³refµ³protocol„³Handle„„„„„„µ±message´³rec´³lit³message„´³tupleµ´³named³body´³refµ„³AssertionDescription„„„„„„µ±sync´³rec´³lit³sync„´³tupleµ´³named³peer´³refµ„³Target„„„„„„µ± breakLink´³rec´³lit³
break-link„´³tupleµ´³named³source´³refµ„³ActorId„„´³named³handle´³refµ³protocol„³Handle„„„„„„„„³
ExitStatus´³orµµ±ok´³lit³ok„„µ±Error´³refµ³protocol„³Error„„„„³
TraceEntry´³rec´³lit³trace„´³tupleµ´³named³ timestamp´³atom³Double„„´³named³actor´³refµ„³ActorId„„´³named³item´³refµ„³ActorActivation„„„„„³ActorActivation´³orµµ±start´³rec´³lit³start„´³tupleµ´³named³ actorName´³refµ„³Name„„„„„„µ±turn´³refµ„³TurnDescription„„µ±stop´³rec´³lit³stop„´³tupleµ´³named³status´³refµ„³
ExitStatus„„„„„„„„³FacetStopReason´³orµµ±explicitAction´³lit³explicit-action„„µ±inert´³lit³inert„„µ±parentStopping´³lit³parent-stopping„„µ± actorStopping´³lit³actor-stopping„„„„³TurnDescription´³rec´³lit³turn„´³tupleµ´³named³id´³refµ„³TurnId„„´³named³cause´³refµ„³ TurnCause„„´³named³actions´³seqof´³refµ„³ActionDescription„„„„„„³ActionDescription´³orµµ±dequeue´³rec´³lit³dequeue„´³tupleµ´³named³event´³refµ„³TargetedTurnEvent„„„„„„µ±enqueue´³rec´³lit³enqueue„´³tupleµ´³named³event´³refµ„³TargetedTurnEvent„„„„„„µ±dequeueInternal´³rec´³lit³dequeue-internal„´³tupleµ´³named³event´³refµ„³TargetedTurnEvent„„„„„„µ±enqueueInternal´³rec´³lit³enqueue-internal„´³tupleµ´³named³event´³refµ„³TargetedTurnEvent„„„„„„µ±spawn´³rec´³lit³spawn„´³tupleµ´³named³link´³atom³Boolean„„´³named³id´³refµ„³ActorId„„„„„„µ±link´³rec´³lit³link„´³tupleµ´³named³ parentActor´³refµ„³ActorId„„´³named³ childToParent´³refµ³protocol„³Handle„„´³named³
childActor´³refµ„³ActorId„„´³named³ parentToChild´³refµ³protocol„³Handle„„„„„„µ±
facetStart´³rec´³lit³ facet-start„´³tupleµ´³named³path´³seqof´³refµ„³FacetId„„„„„„„µ± facetStop´³rec´³lit³
facet-stop„´³tupleµ´³named³path´³seqof´³refµ„³FacetId„„„´³named³reason´³refµ„³FacetStopReason„„„„„„µ±linkedTaskStart´³rec´³lit³linked-task-start„´³tupleµ´³named³taskName´³refµ„³Name„„´³named³id´³refµ„³TaskId„„„„„„„„³TargetedTurnEvent´³rec´³lit³event„´³tupleµ´³named³target´³refµ„³Target„„´³named³detail´³refµ„³ TurnEvent„„„„„³AssertionDescription´³orµµ±value´³rec´³lit³value„´³tupleµ´³named³value³any„„„„„µ±opaque´³rec´³lit³opaque„´³tupleµ´³named³ description³any„„„„„„„³LinkedTaskReleaseReason´³orµµ± cancelled´³lit³ cancelled„„µ±normal´³lit³normal„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³stdenv„´³schema·³version°³ definitions·³ StandardRoute´³orµµ±standard´³ tuplePrefixµ´³named³
transports´³seqof´³refµ„³StandardTransport„„„´³named³key´³atom³
ByteString„„´³named³service³any„´³named³sig´³atom³
ByteString„„´³named³oid³any„„´³named³caveats´³seqof´³refµ³sturdy„³Caveat„„„„„µ±general´³refµ³
gatekeeper„³Route„„„„³StandardTransport´³orµµ±wsUrl´³atom³String„„µ±other³any„„„„³ embeddedType€„„µ³stream„´³schema·³version°³ definitions·³Mode´³orµµ±bytes´³lit³bytes„„µ±lines´³refµ„³LineMode„„µ±packet´³rec´³lit³packet„´³tupleµ´³named³size´³atom³ SignedInteger„„„„„„µ±object´³rec´³lit³object„´³tupleµ´³named³ description³any„„„„„„„³Sink´³orµµ±source´³rec´³lit³source„´³tupleµ´³named³
controller´³embedded´³refµ„³Source„„„„„„„µ± StreamError´³refµ„³ StreamError„„µ±data´³rec´³lit³data„´³tupleµ´³named³payload³any„´³named³mode´³refµ„³Mode„„„„„„µ±eof´³rec´³lit³eof„´³tupleµ„„„„„„³Source´³orµµ±sink´³rec´³lit³sink„´³tupleµ´³named³
controller´³embedded´³refµ„³Sink„„„„„„„µ± StreamError´³refµ„³ StreamError„„µ±credit´³rec´³lit³credit„´³tupleµ´³named³amount´³refµ„³ CreditAmount„„´³named³mode´³refµ„³Mode„„„„„„„„³LineMode´³orµµ±lf´³lit³lf„„µ±crlf´³lit³crlf„„„„³ StreamError´³rec´³lit³error„´³tupleµ´³named³message´³atom³String„„„„„³ CreditAmount´³orµµ±count´³atom³ SignedInteger„„µ± unbounded´³lit³ unbounded„„„„³StreamConnection´³rec´³lit³stream-connection„´³tupleµ´³named³source´³embedded´³refµ„³Source„„„´³named³sink´³embedded´³refµ„³Sink„„„´³named³spec³any„„„„³StreamListenerError´³rec´³lit³stream-listener-error„´³tupleµ´³named³spec³any„´³named³message´³atom³String„„„„„³StreamListenerReady´³rec´³lit³stream-listener-ready„´³tupleµ´³named³spec³any„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³sturdy„´³schema·³version°³ definitions·³Lit´³rec´³lit³lit„´³tupleµ´³named³value³any„„„„³Oid´³atom³ SignedInteger„³Alts´³rec´³lit³or„´³tupleµ´³named³ alternatives´³seqof´³refµ„³Rewrite„„„„„„³PAnd´³rec´³lit³and„´³tupleµ´³named³patterns´³seqof´³refµ„³Pattern„„„„„„³PNot´³rec´³lit³not„´³tupleµ´³named³pattern´³refµ„³Pattern„„„„„³TRef´³rec´³lit³ref„´³tupleµ´³named³binding´³atom³ SignedInteger„„„„„³PAtom´³orµµ±Boolean´³lit³Boolean„„µ±Double´³lit³Double„„µ± SignedInteger´³lit³ SignedInteger„„µ±String´³lit³String„„µ±
ByteString´³lit³
ByteString„„µ±Symbol´³lit³Symbol„„„„³PBind´³rec´³lit³bind„´³tupleµ´³named³pattern´³refµ„³Pattern„„„„„³Caveat´³orµµ±Rewrite´³refµ„³Rewrite„„µ±Alts´³refµ„³Alts„„µ±Reject´³refµ„³Reject„„µ±unknown³any„„„³Reject´³rec´³lit³reject„´³tupleµ´³named³pattern´³refµ„³Pattern„„„„„³Pattern´³orµµ±PDiscard´³refµ„³PDiscard„„µ±PAtom´³refµ„³PAtom„„µ± PEmbedded´³refµ„³ PEmbedded„„µ±PBind´³refµ„³PBind„„µ±PAnd´³refµ„³PAnd„„µ±PNot´³refµ„³PNot„„µ±Lit´³refµ„³Lit„„µ± PCompound´³refµ„³ PCompound„„„„³Rewrite´³rec´³lit³rewrite„´³tupleµ´³named³pattern´³refµ„³Pattern„„´³named³template´³refµ„³Template„„„„„³WireRef´³orµµ±mine´³tupleµ´³lit°„´³named³oid´³refµ„³Oid„„„„„µ±yours´³ tuplePrefixµ´³lit°„´³named³oid´³refµ„³Oid„„„´³named³ attenuation´³seqof´³refµ„³Caveat„„„„„„„³PDiscard´³rec´³lit³_„´³tupleµ„„„³Template´³orµµ±
TAttenuate´³refµ„³
TAttenuate„„µ±TRef´³refµ„³TRef„„µ±Lit´³refµ„³Lit„„µ± TCompound´³refµ„³ TCompound„„„„³ PCompound´³orµµ±rec´³rec´³lit³rec„´³tupleµ´³named³label³any„´³named³fields´³seqof´³refµ„³Pattern„„„„„„„µ±arr´³rec´³lit³arr„´³tupleµ´³named³items´³seqof´³refµ„³Pattern„„„„„„„µ±dict´³rec´³lit³dict„´³tupleµ´³named³entries´³dictof³any´³refµ„³Pattern„„„„„„„„„³ PEmbedded´³lit³Embedded„³ SturdyRef´³rec´³lit³ref„´³tupleµ´³named³
parameters´³refµ„³
Parameters„„„„„³ TCompound´³orµµ±rec´³rec´³lit³rec„´³tupleµ´³named³label³any„´³named³fields´³seqof´³refµ„³Template„„„„„„„µ±arr´³rec´³lit³arr„´³tupleµ´³named³items´³seqof´³refµ„³Template„„„„„„„µ±dict´³rec´³lit³dict„´³tupleµ´³named³entries´³dictof³any´³refµ„³Template„„„„„„„„„³
Parameters´³andµ´³dict·³oid´³named³oid³any„³sig´³named³sig´³atom³
ByteString„„„„´³named³caveats´³refµ„³ CaveatsField„„„„³
TAttenuate´³rec´³lit³ attenuate„´³tupleµ´³named³template´³refµ„³Template„„´³named³ attenuation´³seqof´³refµ„³Caveat„„„„„„³ CaveatsField´³orµµ±present´³dict·³caveats´³named³caveats´³seqof´³refµ„³Caveat„„„„„„µ±invalid´³dict·³caveats´³named³caveats³any„„„„µ±absent´³dict·„„„„„³SturdyStepType´³lit³ref„³SturdyStepDetail´³refµ„³
Parameters„³SturdyPathStepDetail´³refµ„³
Parameters„³SturdyDescriptionDetail´³dict·³key´³named³key´³atom³
ByteString„„³oid´³named³oid³any„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³worker„´³schema·³version°³ definitions·³Instance´³rec´³lit³Instance„´³tupleµ´³named³name´³atom³String„„´³named³argument³any„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³service„´³schema·³version°³ definitions·³State´³orµµ±started´³lit³started„„µ±ready´³lit³ready„„µ±failed´³lit³failed„„µ±complete´³lit³complete„„µ± userDefined³any„„„³
RunService´³rec´³lit³ run-service„´³tupleµ´³named³ serviceName³any„„„„³ ServiceState´³rec´³lit³ service-state„´³tupleµ´³named³ serviceName³any„´³named³state´³refµ„³State„„„„„³ ServiceObject´³rec´³lit³service-object„´³tupleµ´³named³ serviceName³any„´³named³object³any„„„„³RequireService´³rec´³lit³require-service„´³tupleµ´³named³ serviceName³any„„„„³RestartService´³rec´³lit³restart-service„´³tupleµ´³named³ serviceName³any„„„„³ServiceDependency´³rec´³lit³
depends-on„´³tupleµ´³named³depender³any„´³named³dependee´³refµ„³ ServiceState„„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³protocol„´³schema·³version°³ definitions·³Nop´³lit€„³Oid´³atom³ SignedInteger„³Sync´³rec´³lit³S„´³tupleµ´³named³peer´³embedded´³lit<69>„„„„„„³Turn´³seqof´³refµ„³ TurnEvent„„³Error´³rec´³lit³error„´³tupleµ´³named³message´³atom³String„„´³named³detail³any„„„„³Event´³orµµ±Assert´³refµ„³Assert„„µ±Retract´³refµ„³Retract„„µ±Message´³refµ„³Message„„µ±Sync´³refµ„³Sync„„„„³Assert´³rec´³lit³A„´³tupleµ´³named³ assertion´³refµ„³ Assertion„„´³named³handle´³refµ„³Handle„„„„„³Handle´³atom³ SignedInteger„³Packet´³orµµ±Turn´³refµ„³Turn„„µ±Error´³refµ„³Error„„µ± Extension´³refµ„³ Extension„„µ±Nop´³refµ„³Nop„„„„³Message´³rec´³lit³M„´³tupleµ´³named³body´³refµ„³ Assertion„„„„„³Retract´³rec´³lit³R„´³tupleµ´³named³handle´³refµ„³Handle„„„„„³ Assertion³any³ Extension´³rec´³named³label³any„´³named³fields´³seqof³any„„„³ TurnEvent´³tupleµ´³named³oid´³refµ„³Oid„„´³named³event´³refµ„³Event„„„„„³ embeddedType€„„µ³ dataspace„´³schema·³version°³ definitions·³Observe´³rec´³lit³Observe„´³tupleµ´³named³pattern´³refµ³dataspacePatterns„³Pattern„„´³named³observer´³embedded³any„„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³
gatekeeper„´³schema·³version°³ definitions·³Bind´³rec´³lit³bind„´³tupleµ´³named³ description´³refµ„³ Description„„´³named³target´³embedded³any„„´³named³observer´³refµ„³ BindObserver„„„„„³Step´³rec´³named³stepType´³atom³Symbol„„´³tupleµ´³named³detail³any„„„„³Bound´³orµµ±bound´³rec´³lit³bound„´³tupleµ´³named³pathStep´³refµ„³PathStep„„„„„„µ±Rejected´³refµ„³Rejected„„„„³Route´³rec´³lit³route„´³ tuplePrefixµ´³named³
transports´³seqof³any„„„´³named³ pathSteps´³seqof´³refµ„³PathStep„„„„„³Resolve´³rec´³lit³resolve„´³tupleµ´³named³step´³refµ„³Step„„´³named³observer´³embedded´³refµ„³Resolved„„„„„„³PathStep´³rec´³named³stepType´³atom³Symbol„„´³tupleµ´³named³detail³any„„„„³Rejected´³rec´³lit³rejected„´³tupleµ´³named³detail³any„„„„³Resolved´³orµµ±accepted´³rec´³lit³accepted„´³tupleµ´³named³responderSession´³embedded³any„„„„„„µ±Rejected´³refµ„³Rejected„„„„³ Description´³rec´³named³stepType´³atom³Symbol„„´³tupleµ´³named³detail³any„„„„³ ResolvePath´³rec´³lit³ resolve-path„´³tupleµ´³named³route´³refµ„³Route„„´³named³addr³any„´³named³control´³embedded´³refµ„³TransportControl„„„´³named³resolved´³refµ„³Resolved„„„„„³ BindObserver´³orµµ±present´³embedded´³refµ„³Bound„„„µ±absent´³lit€„„„„³ForceDisconnect´³rec´³lit³force-disconnect„´³tupleµ„„„³ResolvedPathStep´³rec´³lit³ path-step„´³tupleµ´³named³origin´³embedded´³refµ„³Resolve„„„´³named³pathStep´³refµ„³PathStep„„´³named³resolved´³refµ„³Resolved„„„„„³TransportControl´³refµ„³ForceDisconnect„³TransportConnection´³rec´³lit³connect-transport„´³tupleµ´³named³addr³any„´³named³control´³embedded´³refµ„³TransportControl„„„´³named³resolved´³refµ„³Resolved„„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„µ³transportAddress„´³schema·³version°³ definitions·³Tcp´³rec´³lit³tcp„´³tupleµ´³named³host´³atom³String„„´³named³port´³atom³ SignedInteger„„„„„³Unix´³rec´³lit³unix„´³tupleµ´³named³path´³atom³String„„„„„³Stdio´³rec´³lit³stdio„´³tupleµ„„„³ WebSocket´³rec´³lit³ws„´³tupleµ´³named³url´³atom³String„„„„„„³ embeddedType€„„µ³dataspacePatterns„´³schema·³version°³ definitions·³AnyAtom´³orµµ±bool´³atom³Boolean„„µ±double´³atom³Double„„µ±int´³atom³ SignedInteger„„µ±string´³atom³String„„µ±bytes´³atom³
ByteString„„µ±symbol´³atom³Symbol„„µ±embedded´³embedded³any„„„„³Pattern´³orµµ±discard´³rec´³lit³_„´³tupleµ„„„„µ±bind´³rec´³lit³bind„´³tupleµ´³named³pattern´³refµ„³Pattern„„„„„„µ±lit´³rec´³lit³lit„´³tupleµ´³named³value´³refµ„³AnyAtom„„„„„„µ±group´³rec´³lit³group„´³tupleµ´³named³type´³refµ„³ GroupType„„´³named³entries´³dictof³any´³refµ„³Pattern„„„„„„„„„³ GroupType´³orµµ±rec´³rec´³lit³rec„´³tupleµ´³named³label³any„„„„„µ±arr´³rec´³lit³arr„´³tupleµ„„„„µ±dict´³rec´³lit³dict„´³tupleµ„„„„„„„³ embeddedType´³refµ³ EntityRef„³Cap„„„„„

@@ -0,0 +1,4 @@
version 1 .
embeddedType EntityRef.Cap .
Observe = <Observe @pattern dataspacePatterns.Pattern @observer #:any>.

Some files were not shown because too many files have changed in this diff.