Introduction
ZisK is a high-performance zkVM (Zero-Knowledge Virtual Machine) designed to generate zero-knowledge proofs of arbitrary program execution. It enables developers to prove the correctness of a computation without revealing its internal state, making ZisK a powerful tool for privacy-preserving and verifiable computation.
Proving systems traditionally involve complex cryptographic operations that require deep expertise and significant computational resources. ZisK abstracts these complexities by providing an optimized toolstack that minimizes computational overhead, making ZK technology accessible to a broader range of developers. With Rust-based execution and planned multi-language support, ZisK is designed to be developer-friendly while maintaining high performance and robust security.
Why ZisK?
- High-performance architecture optimized for low-latency proof generation.
- Rust-based zkVM, with future support for additional languages.
- No recompilation required across different programs.
- Standardized prover interface (JSON-RPC, GRPC, CLI).
- Flexible integration: usable as a standalone service or as a library.
- Decentralized architecture for trustless proof generation.
- Optimized proof generation costs for real-world applications.
- Fully open-source and backed by Polygon zkEVM and Plonky3 technology.
Installation Guide
ZisK can be installed from prebuilt binaries (recommended) or by building the ZisK tools, toolchain and setup files from source.
System Requirements
ZisK currently supports Linux x86_64 and macOS platforms (see note below).
Note: On macOS, proof generation is not yet optimized, so some proofs may take longer to generate.
Required Tools
Ensure the following tools are installed:
Installing Dependencies
Ubuntu
Ubuntu 22.04 or higher is required.
Install all required dependencies with:
sudo apt-get install -y xz-utils jq curl build-essential qemu-system libomp-dev libgmp-dev nlohmann-json3-dev protobuf-compiler uuid-dev libgrpc++-dev libsecp256k1-dev libsodium-dev libpqxx-dev nasm libopenmpi-dev openmpi-bin openmpi-common libclang-dev clang gcc-riscv64-unknown-elf
ZisK uses shared memory to exchange data between processes. The system must be configured to allow enough locked memory per process:
$ ulimit -l
unlimited
One way to achieve this is to edit the file /etc/systemd/system.conf and add the line DefaultLimitMEMLOCK=infinity, then reboot for the change to take effect.
macOS
macOS 14 or higher is required.
You must have Homebrew and Xcode installed.
Install all required dependencies with:
brew reinstall jq curl libomp protobuf openssl nasm pkgconf open-mpi libffi nlohmann-json libsodium riscv-tools
Installing ZisK
Option 1: Prebuilt Binaries (Recommended)
1. To install ZisK using ziskup, run the following command in your terminal:

   curl https://raw.githubusercontent.com/0xPolygonHermez/zisk/main/ziskup/install.sh | bash

   During the installation, you will be prompted to select a setup option. You can choose from the following:

   - Install proving key (default) – Required for generating and verifying proofs.
   - Install proving key (no constant tree files) – Installs the proving key without generating the constant tree files.
   - Install verify key – Needed only if you want to verify proofs.
   - None – Choose this if you only want to compile programs and execute them using the ZisK emulator.

2. Verify the Rust toolchain (which includes support for the riscv64ima-zisk-zkvm compilation target):

   rustup toolchain list

   The output should include an entry for zisk, similar to this:

   stable-x86_64-unknown-linux-gnu (default)
   nightly-x86_64-unknown-linux-gnu
   zisk

3. Verify the cargo-zisk CLI tool:

   cargo-zisk --version
Updating ZisK
To update ZisK to the latest version, simply run:
bash ziskup
You can use the flags --provingkey, --verifykey or --nokey to specify the installation setup and skip the selection prompt.
To install the PLONK proving key (provingKeySnark), run:
bash ziskup setup_snark
Option 2: Building from Source
Build ZisK
1. Clone the ZisK repository:

   git clone https://github.com/0xPolygonHermez/zisk.git
   cd zisk

2. Build the ZisK tools:

   cargo build --release

   Note: If you encounter the following error during compilation on Ubuntu:

   --- stderr
   /usr/lib/x86_64-linux-gnu/openmpi/include/mpi.h:237:10: fatal error: 'stddef.h' file not found

   Follow these steps to resolve it:

   - Locate the stddef.h file:

     find /usr -name "stddef.h"

   - Set the environment variables to include the directory where stddef.h is located, e.g.:

     export C_INCLUDE_PATH=/usr/lib/gcc/x86_64-linux-gnu/13/include
     export CPLUS_INCLUDE_PATH=$C_INCLUDE_PATH

   - Try building again.

3. Copy the tools to the ~/.zisk/bin directory:

   mkdir -p $HOME/.zisk/bin
   cp target/release/cargo-zisk target/release/ziskemu target/release/riscv2zisk target/release/zisk-coordinator target/release/zisk-worker target/release/libziskclib.a $HOME/.zisk/bin

4. Copy the files required for the assembly ROM setup:

   Note: This is only needed on Linux x86_64, since assembly execution is not supported on macOS.

   mkdir -p $HOME/.zisk/zisk/emulator-asm
   cp -r ./emulator-asm/src $HOME/.zisk/zisk/emulator-asm
   cp ./emulator-asm/Makefile $HOME/.zisk/zisk/emulator-asm
   cp -r ./lib-c $HOME/.zisk/zisk

5. Add ~/.zisk/bin to your system PATH. If you are using bash or zsh:

   PROFILE=$([[ "$(uname)" == "Darwin" ]] && echo ".zshenv" || echo ".bashrc")
   echo >>$HOME/$PROFILE && echo "export PATH=\"\$PATH:$HOME/.zisk/bin\"" >> $HOME/$PROFILE
   source $HOME/$PROFILE

6. Install the ZisK Rust toolchain:

   cargo-zisk sdk install-toolchain

   Note: This command installs the ZisK Rust toolchain from prebuilt binaries. If you prefer to build the toolchain from source, follow these steps:

   - Ensure all dependencies required to build the Rust toolchain from source are installed.
   - Build and install the ZisK Rust toolchain:

     cargo-zisk sdk build-toolchain

7. Verify the installation:

   rustup toolchain list

   Confirm that zisk appears in the list of installed toolchains.
Build Setup
Please note that the process can be long, taking approximately 45-60 minutes depending on the machine used.
NodeJS version 20.x or higher is required to build the setup files.
1. Clone the following repositories in the parent folder of the zisk folder created in the previous section:

   git clone https://github.com/0xPolygonHermez/pil2-compiler.git
   git clone https://github.com/0xPolygonHermez/pil2-proofman.git
   git clone https://github.com/0xPolygonHermez/pil2-proofman-js

2. Install packages:

   (cd pil2-compiler && npm i)
   (cd pil2-proofman-js && npm i)

3. All subsequent commands must be executed from the zisk folder created in the previous section:

   cd zisk

4. Generate fixed data:

   cargo run --release --bin arith_frops_fixed_gen
   cargo run --release --bin binary_basic_frops_fixed_gen
   cargo run --release --bin binary_extension_frops_fixed_gen

5. Compile the ZisK PIL:

   node --max-old-space-size=16384 ../pil2-compiler/src/pil.js pil/zisk.pil -I pil,../pil2-proofman/pil2-components/lib/std/pil,state-machines,precompiles -o pil/zisk.pilout -u tmp/fixed -O fixed-to-file

   This command creates the pil/zisk.pilout file.

6. Generate the setup data (this step may take 30-45 minutes):

   node --max-old-space-size=16384 --stack-size=8192 ../pil2-proofman-js/src/main_setup.js -a ./pil/zisk.pilout -b build -t ../pil2-proofman/pil2-components/lib/std/pil -u tmp/fixed -r -s ./state-machines/starkstructs.json

   This command generates the build/provingKey directory.

   Additionally, to generate the snark wrapper:

   node ../pil2-proofman-js/src/main_setup_snark.js -b build -t ../pil2-proofman/pil2-components/lib/std/pil -f -w ../powersOfTau28_hez_final_27.ptau -p ./state-machines/publics.json -n plonk

   It is stored under the build/provingKeySnark directory.

7. Copy (or move) the build/provingKey directory to the $HOME/.zisk directory:

   cp -R build/provingKey $HOME/.zisk
Uninstall ZisK

1. Uninstall the ZisK toolchain:

   rustup toolchain uninstall zisk

2. Delete the ZisK folder:

   rm -rf $HOME/.zisk
Quickstart
In this guide, you will learn how to install ZisK, create a simple program and run it using ZisK.
Installation
ZisK currently supports Linux x86_64 and macOS platforms (see note below).
Note: On macOS, proof generation is not yet optimized, so some proofs may take longer to generate.
Ubuntu 22.04 or higher is required.
macOS 14 or higher with Xcode installed is required.
1. Make sure you have Rust installed.

2. Install all required dependencies.

   Ubuntu:

   sudo apt-get install -y xz-utils jq curl build-essential qemu-system libomp-dev libgmp-dev nlohmann-json3-dev protobuf-compiler uuid-dev libgrpc++-dev libsecp256k1-dev libsodium-dev libpqxx-dev nasm libopenmpi-dev openmpi-bin openmpi-common libclang-dev clang gcc-riscv64-unknown-elf

   macOS:

   brew reinstall jq curl libomp protobuf openssl nasm pkgconf open-mpi libffi nlohmann-json libsodium

3. To install ZisK using ziskup, run the following command in your terminal:

   curl https://raw.githubusercontent.com/0xPolygonHermez/zisk/main/ziskup/install.sh | bash
Create a Project
The first step is to generate a new example project using the cargo-zisk sdk new <name> command. This command creates a new directory named <name> in your current directory. For example:
cargo-zisk sdk new sha_hasher
cd sha_hasher
This will create a project with the following structure:
.
├── build.rs
├── Cargo.toml
├── .gitignore
├── guest
| ├── src
| | └── main.rs
| └── Cargo.toml
└── host
├── src
| └── main.rs
├── bin
| ├── compressed.rs
| ├── execute.rs
| ├── prove.rs
| ├── plonk.rs
| ├── verify-constraints.rs
| └── ziskemu.rs
├── build.rs
└── Cargo.toml
The example program takes a number n as input and computes the SHA-256 hash n times.
Build
The next step is to build the program to generate an ELF file (RISC-V), which will be used later to generate the proof. Execute:
cargo build --release
This command builds the program using the zkvm target. The resulting sha_hasher ELF file (without extension) is generated in the ./target/elf/riscv64ima-zisk-zkvm-elf/release directory.
Execute
Before generating a proof, you can test the program using the ZisK emulator to ensure its correctness:
cargo run --release --bin ziskemu
The emulator will execute the program and display the public outputs:
public 0: 0x98211882
public 1: 0xbd13089b
public 2: 0x6ccf1fca
public 3: 0x81f7f0e4
public 4: 0xabf6352a
public 5: 0x0c39c9b1
public 6: 0x1f142cac
public 7: 0x233f1280
These outputs should match the native execution, confirming the program works correctly.
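The eight public values are simply the 32-byte SHA-256 digest exposed as 4-byte words. A minimal native sketch of that conversion (pure std; the big-endian byte order is an assumption made here for illustration, not a guarantee of the exact convention ZisK uses):

```rust
use std::convert::TryInto;

// Split a 32-byte digest into eight u32 public-output words.
// Big-endian word order is assumed for this sketch.
fn digest_to_publics(digest: &[u8; 32]) -> [u32; 8] {
    let mut out = [0u32; 8];
    for (i, chunk) in digest.chunks(4).enumerate() {
        out[i] = u32::from_be_bytes(chunk.try_into().unwrap());
    }
    out
}

fn main() {
    // A stand-in digest (illustrative bytes, not the hash shown above)
    let mut digest = [0u8; 32];
    for (i, b) in digest.iter_mut().enumerate() {
        *b = i as u8;
    }
    for (i, word) in digest_to_publics(&digest).iter().enumerate() {
        println!("public {}: {:#010x}", i, word);
    }
}
```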
Verify Constraints
Once you've confirmed the program executes correctly, you can verify the constraints without generating a full proof. This is useful for debugging and ensuring correctness:
cargo run --release --bin verify-constraints
This command will:
- Execute the program using the ZisK emulator
- Generate the execution trace
- Verify all arithmetic and logical constraints
- Check that all state machine transitions are valid
If successful, you'll see:
✓ All constraints for Instance #0 of Main were verified
✓ All constraints for Instance #0 of Rom were verified
...
✓ All global constraints were successfully verified
Prove
To generate a cryptographic proof of execution, run:
cargo run --release --bin prove
This will:
- Execute the program and generate the execution trace
- Compute witness values for all state machines
- Generate the polynomial commitments
- Create the zk-STARK proof
The proof will be saved in the ./proof directory. This process may take several minutes depending on the program complexity.
Compressed Proof (Optional)
After generating the proof, you can optionally create a compressed version to reduce the proof size:
cargo run --release --bin compressed
This generates an additional compressed proof on top of the existing one using recursive composition. The compressed proof is significantly smaller while maintaining the same security guarantees.
Writing Programs
This document explains how to write or modify a Rust program for execution in ZisK.
Setup
Code changes
Writing a Rust program for ZisK is similar to writing a standard Rust program, with a few minor modifications. Follow these steps:
1. Modify the main.rs file. Add the following code to mark the main function as the entry point for ZisK:

   #![no_main]
   ziskos::entrypoint!(main);

   fn main() {
       // ...
   }

2. Modify the Cargo.toml file. Add the ziskos crate as a dependency:

   [dependencies]
   ziskos = { git = "https://github.com/0xPolygonHermez/zisk.git" }
Let's show these changes using the example program from the Quickstart section.
Example program
main.rs:
// This example program takes a number `n` as input and computes the SHA-256 hash `n` times sequentially.

// Mark the main function as the entry point for ZisK
#![no_main]
ziskos::entrypoint!(main);

use sha2::{Digest, Sha256};

fn main() {
    // Read the input number `n`
    let n: u32 = ziskos::io::read();

    let mut hash = [0u8; 32];

    // Compute SHA-256 hashing `n` times
    for _ in 0..n {
        let mut hasher = Sha256::new();
        hasher.update(hash);
        let digest = &hasher.finalize();
        hash = Into::<[u8; 32]>::into(*digest);
    }

    // Commit the final hash as public output
    ziskos::io::commit(&hash);
}
Cargo.toml:
[package]
name = "guest"
version = "0.1.0"
edition = "2021"
[dependencies]
byteorder = "1.5.0"
sha2 = "0.10.8"
ziskos = { git = "https://github.com/0xPolygonHermez/zisk.git" }
Input/Output Data
To read input data in your ZisK program, use the ziskos::io::read() function, which deserializes data from the input:
// Read a u32 value from input
let n: u32 = ziskos::io::read();
You can also read custom types that implement the Deserialize trait:
// Read a custom struct from input
let my_data: MyStruct = ziskos::io::read();
To write public output data, use the ziskos::io::commit() function, which serializes and commits the output:
// Commit the hash as public output
let hash: [u8; 32] = compute_hash();
ziskos::io::commit(&hash);
The output can be any type that implements the Serialize trait. The data will be serialized and made available as public outputs that can be verified by anyone checking the proof.
Build
Before compiling your program for ZisK, you can test it on the native architecture just like any regular Rust program using the cargo command.
Once your program is ready to run on ZisK, compile it into an ELF file (RISC-V architecture), using the cargo-zisk CLI tool:
cargo-zisk build
This command compiles the program using the zisk target. The resulting guest ELF file (without extension) is generated in the ./target/riscv64ima-zisk-zkvm-elf/debug directory.
For production, compile the ELF file with the --release flag, similar to how you compile Rust projects:
cargo-zisk build --release
In this case, the guest ELF file will be generated in the ./target/elf/riscv64ima-zisk-zkvm-elf/release directory.
Execute
You can test your compiled program using the ZisK emulator (ziskemu) before generating a proof. Use the -e (--elf) flag to specify the location of the ELF file and the -i (--inputs) flag to specify the location of the input file:
cargo-zisk build --release
ziskemu -e target/elf/riscv64ima-zisk-zkvm-elf/release/guest -i host/tmp/input.bin
If the program requires a large number of ZisK steps, you might encounter the following error:
Error during emulation: EmulationNoCompleted
Error: Error executing Run command
To resolve this, you can increase the number of execution steps using the -n (--max-steps) flag. For example:
ziskemu -e target/elf/riscv64ima-zisk-zkvm-elf/release/guest -i host/tmp/input.bin -n 10000000000
Metrics and Statistics
Performance Metrics
You can get performance metrics related to the program execution in ZisK using the -m (--log-metrics) flag in the cargo-zisk run command or in ziskemu tool:
ziskemu -e target/elf/riscv64ima-zisk-zkvm-elf/release/guest -i host/tmp/input.bin -m
The output will include details such as execution time, throughput, and clock cycles per step:
process_rom() steps=85309 duration=0.0009 tp=89.8565 Msteps/s freq=3051.0000 33.9542 clocks/step
...
Execution Statistics
You can get statistics related to the program execution in Zisk using the -X (--stats) flag in ziskemu tool:
ziskemu -e target/elf/riscv64ima-zisk-zkvm-elf/release/guest -i host/tmp/input.bin -X
The output will include details such as cost definitions, total cost, register reads/writes, opcode statistics, etc:
Cost definitions:
AREA_PER_SEC: 1000000 steps
COST_MEMA_R1: 0.00002 sec
COST_MEMA_R2: 0.00004 sec
COST_MEMA_W1: 0.00004 sec
COST_MEMA_W2: 0.00008 sec
COST_USUAL: 0.000008 sec
COST_STEP: 0.00005 sec
Total Cost: 12.81 sec
Main Cost: 4.27 sec 85308 steps
Mem Cost: 2.22 sec 222052 steps
Mem Align: 0.05 sec 2701 steps
Opcodes: 6.24 sec 1270 steps (81182 ops)
Usual: 0.03 sec 4127 steps
Memory: 135563 a reads + 1625 na1 reads + 10 na2 reads + 84328 a writes + 524 na1 writes + 2 na2 writes = 137198 reads + 84854 writes = 222052 r/w
Opcodes:
flag: 0.00 sec (0 steps/op) (89 ops)
copyb: 0.00 sec (0 steps/op) (10568 ops)
add: 1.12 sec (77 steps/op) (14569 ops)
ltu: 0.01 sec (77 steps/op) (101 ops)
...
xor: 1.06 sec (77 steps/op) (13774 ops)
signextend_b: 0.03 sec (109 steps/op) (320 ops)
signextend_w: 0.03 sec (109 steps/op) (320 ops)
...
Prove
Program Setup
Before generating a proof (or verifying the constraints), you need to generate the program setup files. This must be done the first time after building the program ELF file, or any time it changes:
cargo-zisk rom-setup -e target/elf/riscv64ima-zisk-zkvm-elf/release/guest -k $HOME/.zisk/provingKey
In this command:
- -e (--elf) specifies the ELF file location.
- -k (--proving-key) specifies the directory containing the proving key. This is optional and defaults to $HOME/.zisk/provingKey.
The program setup files will be generated in the cache directory located at $HOME/.zisk.
To clean the cache directory content, use the following command:
cargo-zisk clean
Verify Constraints
Before generating a proof (which can take some time), you can verify that all constraints are satisfied:
cargo-zisk verify-constraints -e target/elf/riscv64ima-zisk-zkvm-elf/release/guest -i host/tmp/input.bin -k $HOME/.zisk/provingKey
In this command:
- -e (--elf) specifies the ELF file location.
- -i (--input) specifies the input file location.
- -k (--proving-key) specifies the directory containing the proving key. This is optional and defaults to $HOME/.zisk/provingKey.
If everything is correct, you will see an output similar to:
[INFO ] GlCstVfy: --> Checking global constraints
[INFO ] CstrVrfy: ··· ✓ All global constraints were successfully verified
[INFO ] CstrVrfy: ··· ✓ All constraints were verified
Generate Proof
To generate a proof, run the following command:
cargo-zisk prove -e target/elf/riscv64ima-zisk-zkvm-elf/release/guest -i host/tmp/input.bin -k $HOME/.zisk/provingKey -o proof -a -y
In this command:
- -e (--elf) specifies the ELF file location.
- -i (--input) specifies the input file location.
- -k (--proving-key) specifies the directory containing the proving key. This is optional and defaults to $HOME/.zisk/provingKey.
- -o (--output) determines the output directory (in this example, proof).
- -a (--aggregation) indicates that a final aggregated proof (containing all generated sub-proofs) should be produced.
- -y (--verify-proofs) instructs the tool to verify the proof immediately after it is generated (verification can also be performed later using the cargo-zisk verify command).
If the process is successful, you should see a message similar to:
...
[INFO ] ProofMan: ✓ Vadcop Final proof was verified
[INFO ] stop <<< GENERATING_VADCOP_PROOF 91706ms
[INFO ] ProofMan: Proofs generated successfully
Concurrent Proof Generation
Zisk proofs can be generated using multiple processes concurrently to improve performance and scalability. The standard MPI (Message Passing Interface) approach is used to launch these processes, which can run either on the same server or across multiple servers.
To execute a Zisk proof using multiple processes, use the following command:
mpirun --bind-to none -np <num_processes> -x OMP_NUM_THREADS=<num_threads_per_process> -x RAYON_NUM_THREADS=<num_threads_per_process> target/release/cargo-zisk <zisk arguments>
In this command:
- <num_processes> specifies the number of processes to launch.
- <num_threads_per_process> sets the number of threads used by each process via the OMP_NUM_THREADS and RAYON_NUM_THREADS environment variables.
- --bind-to none prevents binding processes to specific cores, allowing the operating system to schedule them dynamically for better load balancing.
Running a Zisk proof with multiple processes enables efficient workload distribution across multiple servers. On a single server with many cores, splitting execution into smaller subsets of cores generally improves performance by increasing concurrency. As a general rule, <num_processes> * <num_threads_per_process> should match the number of available CPU cores or double that if hyperthreading is enabled.
The total memory requirement increases proportionally with the number of processes. If each process requires approximately 25GB of memory, running P processes will require roughly (25 * P)GB of memory. Ensure that the system has sufficient available memory to accommodate all running processes.
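The two sizing rules above can be sketched with a pair of illustrative helper functions (the 25 GB per-process figure is the example value from the text, not a fixed ZisK requirement):

```rust
// Suggested process count: logical cores divided by threads per process
fn suggested_processes(physical_cores: u32, threads_per_process: u32, hyperthreading: bool) -> u32 {
    let logical_cores = if hyperthreading { physical_cores * 2 } else { physical_cores };
    logical_cores / threads_per_process
}

// Total memory estimate: per-process requirement times process count
fn required_memory_gb(processes: u32, gb_per_process: u32) -> u32 {
    processes * gb_per_process
}

fn main() {
    // 32 physical cores with hyperthreading = 64 logical cores;
    // 16 threads per process suggests 4 processes, at ~25 GB each
    let p = suggested_processes(32, 16, true);
    println!("processes = {}, memory ~= {} GB", p, required_memory_gb(p, 25));
}
```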
GPU Proof Generation
Zisk proofs can also be generated using GPUs to significantly improve performance and scalability. Follow these steps to enable GPU support:
-
GPU support is only available for NVIDIA GPUs.
-
Make sure the CUDA Toolkit is installed.
-
Build Zisk with GPU support enabled.
Note: It is recommended to compile Zisk directly on the server where it will be executed. The binary will be optimized for the local GPU architecture, which can lead to better runtime performance.
GPU support must be enabled at compile time. Follow the instructions in the Build ZisK section under Option 2: Building from source in the Installation guide, but replace the build command with:
cargo build --release --features gpu
You can combine GPU-based execution with concurrent proof generation using multiple processes, as described in the Concurrent Proof Generation section.
Note: GPU memory is typically more limited than CPU memory. When combining GPU execution with concurrent proof generation, ensure that each process has sufficient memory available on the GPU to avoid out-of-memory errors.
Verify Proof
To verify a generated proof, use the following command:
cargo-zisk verify -p ./proof/vadcop_final_proof.bin -k $HOME/.zisk/provingKey
In this command:
-p(--proof) specifies the final proof file generated with cargo-zisk prove.- The remaining flags specify the files required for verification; they are optional, set by default to the files found in the
$HOME/.ziskdirectory.
Precompiles
Precompiles are built-in system functions within ZisK’s operating system that accelerate computationally expensive and frequently used operations such as the Keccak-f permutation and Secp256k1 addition and doubling.
These precompiles improve proving efficiency by offloading intensive computations from ZisK programs to dedicated, pre-integrated sub-processors.
How Precompiles Work
Precompiles are primarily used to patch third-party crates, replacing costly operations with system calls. This ensures that commonly used cryptographic primitives, such as Keccak hashing and elliptic curve operations, can be executed efficiently within ZisK programs. The patched crates are then used as regular dependencies in the ZisK programs you write; the patched tiny-keccak crate is one example.
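In practice, a patched crate can be wired in through Cargo's standard `[patch]` mechanism. A hypothetical sketch of what this looks like in a guest program's Cargo.toml (the repository URL and branch below are placeholders, not the real location of the patched crate):

```toml
[dependencies]
tiny-keccak = { version = "2.0", features = ["keccak"] }

[patch.crates-io]
# Placeholder location -- substitute the actual patched fork of tiny-keccak
tiny-keccak = { git = "https://github.com/<org>/tiny-keccak", branch = "zisk" }
```

With this override in place, the rest of the program keeps using the crate's normal API while the expensive operations are routed to ZisK system calls.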
Available Precompiles in ZisK
Below is a summary of the precompiles currently available in ZisK:
- syscall_add256: Addition over 256-bit non-negative integers.
- syscall_arith256: Multiplication followed by addition over 256-bit non-negative integers.
- syscall_arith256_mod: Modular multiplication followed by addition over 256-bit non-negative integers.
- syscall_arith384_mod: Modular multiplication followed by addition over 256-bit non-negative integers.
- syscall_keccak_f: Keccak-f 1600 permutation function from the Keccak cryptographic sponge construction.
- syscall_sha256_f: Extend and compress function of the SHA-256 cryptographic hash algorithm.
- syscall_poseidon2: Compression function of the Poseidon2 cryptographic hash algorithm.
- syscall_secp256k1_add: Elliptic curve point addition over the Secp256k1 curve.
- syscall_secp256k1_dbl: Elliptic curve point doubling over the Secp256k1 curve.
- syscall_secp256r1_add: Elliptic curve point addition over the Secp256r1 curve.
- syscall_secp256r1_dbl: Elliptic curve point doubling over the Secp256r1 curve.
- syscall_bn254_curve_add: Elliptic curve point addition over the Bn254 curve.
- syscall_bn254_curve_dbl: Elliptic curve point doubling over the Bn254 curve.
- syscall_bn254_complex_add: Complex addition within the quadratic extension built over the base field of the Bn254 curve.
- syscall_bn254_complex_sub: Complex subtraction within the quadratic extension built over the base field of the Bn254 curve.
- syscall_bn254_complex_mul: Complex multiplication within the quadratic extension built over the base field of the Bn254 curve.
- syscall_arith384_mod: Modular multiplication followed by addition over 384-bit non-negative integers.
- syscall_bls12_381_curve_add: Elliptic curve point addition over the BLS12-381 curve.
- syscall_bls12_381_curve_dbl: Elliptic curve point doubling over the BLS12-381 curve.
- syscall_bls12_381_complex_add: Complex addition within the quadratic extension built over the base field of the BLS12-381 curve.
- syscall_bls12_381_complex_sub: Complex subtraction within the quadratic extension built over the base field of the BLS12-381 curve.
- syscall_bls12_381_complex_mul: Complex multiplication within the quadratic extension built over the base field of the BLS12-381 curve.
Distributed Proving
Generating a ZisK proof can be computationally intensive, especially for large programs. The distributed proving system lets you split the workload across multiple machines, reducing proof generation time by parallelizing the work.
This chapter covers how to set up and run a distributed proving cluster, from launching a coordinator to connecting workers and submitting proof requests.
How It Works
A distributed proving cluster consists of two roles:
- A Coordinator that receives proof requests and orchestrates the work.
- One or more Workers that execute the actual proof computation.
When you submit a proof request, the process unfolds in three phases:
- Partial Contributions — The coordinator assigns segments of the work to available workers based on their compute capacity. Each worker computes its partial challenges independently.
- Prove — Workers compute the global challenge and generate their respective partial proofs.
- Aggregation — The first worker to finish is selected as the aggregator. It collects all partial proofs and produces the final proof.
The coordinator returns the final proof to the client once aggregation completes.
Workers report their compute capacity when they register. The coordinator selects workers sequentially from the available pool until the requested capacity is met. While assigned to a job, a worker is marked as busy and won't receive new tasks.
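The selection policy described above can be sketched as follows (an illustrative model, not the coordinator's actual code; the Worker struct and its field names are assumptions):

```rust
#[derive(Debug)]
struct Worker {
    id: u32,
    capacity: u32, // compute units reported at registration
    busy: bool,
}

// Pick workers sequentially from the pool until the requested capacity is
// covered; selected workers are marked busy so they receive no new tasks.
fn select_workers(pool: &mut Vec<Worker>, requested_capacity: u32) -> Vec<u32> {
    let mut selected = Vec::new();
    let mut covered = 0u32;
    for w in pool.iter_mut() {
        if covered >= requested_capacity {
            break;
        }
        if !w.busy {
            w.busy = true;
            covered += w.capacity;
            selected.push(w.id);
        }
    }
    selected
}

fn main() {
    let mut pool = vec![
        Worker { id: 1, capacity: 4, busy: false },
        Worker { id: 2, capacity: 4, busy: true }, // already assigned to a job
        Worker { id: 3, capacity: 8, busy: false },
    ];
    // Requesting 10 compute units selects workers 1 and 3 (4 + 8 >= 10)
    println!("{:?}", select_workers(&mut pool, 10));
}
```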
Getting Started
Building
From the project root, build both binaries:
cargo build --release --bin zisk-coordinator --bin zisk-worker
Running Locally
1. Start the coordinator:
cargo run --release --bin zisk-coordinator
2. Start a worker (in a separate terminal):
cargo run --release --bin zisk-worker -- --elf <elf-file-path> --inputs-folder <inputs-folder>
3. Submit a proof request (in a separate terminal):
cargo run --release --bin zisk-coordinator prove --inputs-uri <input-filename> --compute-capacity 10
The --compute-capacity flag specifies how many compute units the proof requires. The coordinator assigns workers until this capacity is covered.
Docker Deployment
For multi-machine setups, Docker simplifies deployment:
# Build the image (CPU-only)
docker build -t zisk-distributed:latest -f distributed/Dockerfile .
# For GPU support
docker build --build-arg GPU=true -t zisk-distributed:gpu -f distributed/Dockerfile .
# Create a network for container DNS resolution
docker network create zisk-net || true
Start the coordinator:
LOGS_DIR="<logs-folder>"
docker run -d --rm --name zisk-coordinator \
--network zisk-net \
-v "$LOGS_DIR:/var/log/distributed" \
-e RUST_LOG=info \
zisk-distributed:latest \
zisk-coordinator --config /app/config/coordinator/dev.toml
Start a worker:
LOGS_DIR="<logs-folder>"
PROVING_KEY_DIR="<provingKey-folder>"
ELF_DIR="<elf-folder>"
INPUTS_DIR="<inputs-folder>"
docker run -d --rm --name zisk-worker-1 \
--network zisk-net --shm-size=20g \
-v "$LOGS_DIR:/var/log/distributed" \
-v "$HOME/.zisk/cache:/app/.zisk/cache:ro" \
-v "$PROVING_KEY_DIR:/app/proving-keys:ro" \
-v "$ELF_DIR:/app/elf:ro" \
-v "$INPUTS_DIR:/app/inputs:ro" \
-e RUST_LOG=info \
zisk-distributed:latest zisk-worker --coordinator-url http://zisk-coordinator:50051 \
--elf /app/elf/zec.elf --proving-key /app/proving-keys --inputs-folder /app/inputs
Submit a proof:
docker exec -it zisk-coordinator \
zisk-coordinator prove --inputs-uri <input-filename> --compute-capacity 10
Note: Use the filename only when submitting proofs, not the full path. Workers resolve files relative to their
--inputs-folder.
Container paths reference:
| Path | Purpose |
|---|---|
| /app/config/{coordinator,worker}/ | Configuration files |
| /app/bin/ | Binaries |
| /app/.zisk/cache/ | Cache (mount from host $HOME/.zisk/cache) |
| /var/log/distributed/ | Log files |
Coordinator
The coordinator is responsible for managing the distributed proof generation process. It receives proof requests from clients and assigns work to available workers.
To start a coordinator instance with default settings:
cargo run --release --bin zisk-coordinator
Coordinator Configuration
The coordinator can be configured using either a TOML configuration file or command-line arguments.
If no configuration file is explicitly provided, the system falls back to the ZISK_COORDINATOR_CONFIG_PATH environment variable to locate one. If neither the CLI argument nor environment variable is set, built-in defaults are used.
Example:
# You can specify the configuration file path using a command line argument:
cargo run --release --bin zisk-coordinator -- --config /path/to/my-config.toml
# You can specify the configuration file path using an environment variable:
export ZISK_COORDINATOR_CONFIG_PATH="/path/to/my-config.toml"
cargo run --release --bin zisk-coordinator
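The lookup order above can be modeled as a small sketch (illustrative only, not the coordinator's actual code):

```rust
use std::env;

// Configuration-file resolution order: an explicit --config argument wins,
// then the ZISK_COORDINATOR_CONFIG_PATH environment variable; if neither is
// present (None), built-in defaults apply.
fn resolve_config_path(cli_arg: Option<String>) -> Option<String> {
    cli_arg.or_else(|| env::var("ZISK_COORDINATOR_CONFIG_PATH").ok())
}

fn main() {
    env::set_var("ZISK_COORDINATOR_CONFIG_PATH", "/etc/zisk/coordinator.toml");
    // The CLI argument takes precedence over the environment variable
    println!("{:?}", resolve_config_path(Some("/path/to/my-config.toml".to_string())));
    // Without a CLI argument, the environment variable is used
    println!("{:?}", resolve_config_path(None));
}
```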
The table below lists the available configuration options for the Coordinator:
| TOML Key | CLI Argument | Environment Variable | Type | Default | Description |
|---|---|---|---|---|---|
| service.name | - | - | String | ZisK Distributed Coordinator | Service name |
| service.environment | - | - | String | development | Service environment (development, staging, production) |
| server.host | - | - | String | 0.0.0.0 | Server host |
| server.port | --port | - | Number | 50051 | Server port |
| server.proofs_dir | --proofs-dir | - | String | proofs | Directory to save generated proofs (conflicts with --no-save-proofs) |
| - | --no-save-proofs | - | Boolean | false | Disable saving proofs (conflicts with --proofs-dir) |
| - | -c, --compressed-proofs | - | Boolean | false | Generate compressed proofs |
| server.shutdown_timeout_seconds | - | - | Number | 30 | Graceful shutdown timeout in seconds |
| logging.level | - | RUST_LOG | String | debug | Logging level (error, warn, info, debug, trace) |
| logging.format | - | - | String | pretty | Logging format (pretty, json, compact) |
| logging.file_path | - | - | String | - | Optional. Log file path (enables file logging) |
| coordinator.max_workers_per_job | - | - | Number | 10 | Maximum workers per proof job |
| coordinator.max_total_workers | - | - | Number | 1000 | Maximum total registered workers |
| coordinator.phase1_timeout_seconds | - | - | Number | 300 | Phase 1 timeout in seconds |
| coordinator.phase2_timeout_seconds | - | - | Number | 600 | Phase 2 timeout in seconds |
| coordinator.webhook_url | --webhook-url | - | String | - | Optional. Webhook URL to notify on job completion |
Configuration File Examples
Example development configuration file:
[service]
name = "ZisK Distributed Coordinator"
environment = "development"
[logging]
level = "debug"
format = "pretty"
Example production configuration file:
[service]
name = "ZisK Distributed Coordinator"
environment = "production"
[server]
host = "0.0.0.0"
port = 50051
proofs_dir = "proofs"
[logging]
level = "info"
format = "json"
file_path = "/var/log/distributed/coordinator.log"
[coordinator]
max_workers_per_job = 20 # Maximum workers per proof job
max_total_workers = 5000 # Maximum total registered workers
phase1_timeout_seconds = 600 # 10 minutes for phase 1
phase2_timeout_seconds = 1200 # 20 minutes for phase 2
webhook_url = "http://webhook.example.com/notify?job_id={$job_id}"
Webhook URL
The Coordinator can notify an external service when a job finishes by sending a request to a configured webhook URL. The placeholder {$job_id} can be included in the URL and will be replaced with the finished job’s ID. If no placeholder is provided, the Coordinator automatically appends /{job_id} to the end of the URL.
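The substitution rule above can be sketched in Rust. This is a hypothetical standalone helper, not the coordinator's actual code: it replaces the `{$job_id}` placeholder when present and appends `/{job_id}` otherwise.

```rust
// Sketch of the webhook URL templating rule (hypothetical helper):
// replace {$job_id} if present, otherwise append /{job_id}.
fn webhook_url_for_job(template: &str, job_id: &str) -> String {
    if template.contains("{$job_id}") {
        template.replace("{$job_id}", job_id)
    } else {
        format!("{}/{}", template, job_id)
    }
}

fn main() {
    let with_placeholder = "http://webhook.example.com/notify?job_id={$job_id}";
    let without = "http://webhook.example.com/notify";
    println!("{}", webhook_url_for_job(with_placeholder, "job_abc123"));
    println!("{}", webhook_url_for_job(without, "job_abc123"));
}
```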
All webhook notifications are sent as JSON POST requests with the following structure:
{
"job_id": "job_12345",
"success": true,
"duration_ms": 45000,
"proof": <array of u64...>,
"timestamp": "2025-10-03T14:30:00Z",
"error": null
}
Fields Description
| Field | Type | Description |
|---|---|---|
| job_id | string | Unique identifier for the proof generation job |
| success | boolean | true if proof generation completed successfully, false if it failed |
| duration_ms | number | Total execution time in milliseconds from job start to completion |
| proof | array<u64> \| null | Final proof data as an array of integers (only present on success) |
| timestamp | string | ISO 8601 timestamp when the notification was sent |
| error | object \| null | Error details (only present on failure) |
Error Object Structure
When success is false, the error field contains:
{
"code": "WORKER_FAILURE",
"message": "Worker node-003 failed during proof generation: Out of memory"
}
Successful Proof Generation Example:
{
"job_id": "job_abc123",
"success": true,
"duration_ms": 32500,
"proof": [1234567890, 9876543210, 1357924680, ...],
"timestamp": "2025-10-03T14:30:25Z",
"error": null
}
Failed Job Example:
{
"job_id": "job_def456",
"success": false,
"duration_ms": 15000,
"proof": null,
"timestamp": "2025-10-03T14:31:10Z",
"error": {
"code": "WORKER_ERROR",
"message": "Memory exhaustion during proof generation"
}
}
Webhook Implementation Guidelines
HTTP Requirements:
- Method: POST
- Content-Type: `application/json`
- Timeout: 10 seconds (configurable)
- Retry: Currently no automatic retries (implement idempotency)
Recommended Response:
Your webhook endpoint should respond with:
- Success: HTTP 200-299 status code
- Body: Any valid response (ignored by coordinator)
HTTP/1.1 200 OK
Content-Type: application/json
{"received": true, "job_id": "job_abc123"}
If your webhook endpoint is unavailable or returns an error:
- The coordinator logs the failure but continues operation
- No automatic retries are performed
- Consider implementing your own retry mechanism or message queue
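Since the coordinator performs no automatic retries, a receiving service can wrap its own delivery handling in a backoff loop. The sketch below is generic (not ZisK code); `send` stands in for whatever HTTP delivery call your endpoint infrastructure uses.

```rust
use std::thread::sleep;
use std::time::Duration;

// Generic exponential-backoff retry loop (assumes max_attempts >= 1).
// `send` is a placeholder for your actual delivery call.
fn deliver_with_retry<F>(mut send: F, max_attempts: u32) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    let mut delay = Duration::from_millis(100);
    for attempt in 1..=max_attempts {
        match send() {
            Ok(()) => return Ok(()),
            Err(e) if attempt == max_attempts => return Err(e),
            Err(_) => {
                sleep(delay);
                delay *= 2; // exponential backoff between attempts
            }
        }
    }
    unreachable!()
}

fn main() {
    // Simulated endpoint: fails twice, then succeeds on the third attempt.
    let mut calls = 0;
    let result = deliver_with_retry(
        || {
            calls += 1;
            if calls < 3 { Err("unavailable".into()) } else { Ok(()) }
        },
        5,
    );
    println!("delivered after {} calls: {}", calls, result.is_ok());
}
```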
Command Line Arguments
# Show help
cargo run --release --bin zisk-coordinator -- --help
# Run coordinator with custom port
cargo run --release --bin zisk-coordinator -- --port 50051
# Run with specific configuration
cargo run --release --bin zisk-coordinator -- --config production.toml
# Run with webhook URL
cargo run --release --bin zisk-coordinator -- --webhook-url http://webhook.example.com/notify --port 50051
Worker
The worker is responsible for executing proof generation tasks assigned by the coordinator. It registers with the coordinator, reports its compute capacity, and waits for tasks to be assigned.
To start a worker instance with default settings:
cargo run --release --bin zisk-worker -- --elf <elf-file-path> --inputs-folder <inputs-folder>
Worker Configuration
The worker can be configured using either a TOML configuration file or command-line arguments.
If no configuration file is explicitly provided, the system falls back to the ZISK_WORKER_CONFIG_PATH environment variable to locate one. If neither the CLI argument nor environment variable is set, built-in defaults are used.
Example:
# You can specify the configuration file path using a command line argument:
cargo run --release --bin zisk-worker -- --config /path/to/my-config.toml
# You can specify the configuration file path using an environment variable:
export ZISK_WORKER_CONFIG_PATH="/path/to/my-config.toml"
cargo run --release --bin zisk-worker
Input Files Handling
Workers need to know where to find input files for proof generation. The --inputs-folder parameter specifies the base directory where input files are stored:
- Default: The current working directory (`.`) if not specified
- Usage: When the coordinator sends a prove command with an input filename, the worker combines `--inputs-folder` + `filename` to locate the file
- Benefits: Allows input files to be organized in a dedicated directory, separate from the worker executable
Example:
# Worker with inputs in specific folder
cargo run --release --bin zisk-worker -- --elf program.elf --inputs-folder /data/inputs/
# Coordinator requests proof for "input.bin" -> Worker looks for "/data/inputs/input.bin"
cargo run --release --bin zisk-coordinator -- prove --inputs-uri input.bin --compute-capacity 10
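The lookup rule above can be sketched as a standalone Rust illustration (a hypothetical helper, not the worker's actual code):

```rust
use std::path::{Path, PathBuf};

// Sketch of how a worker resolves an input file: join the --inputs-folder
// base directory with the filename sent by the coordinator.
fn resolve_input(inputs_folder: &str, filename: &str) -> PathBuf {
    Path::new(inputs_folder).join(filename)
}

fn main() {
    let p = resolve_input("/data/inputs", "input.bin");
    println!("{}", p.display());
}
```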
The table below lists the available configuration options for the Worker:
| TOML Key | CLI Argument | Environment Variable | Type | Default | Description |
|---|---|---|---|---|---|
| worker.worker_id | --worker-id | - | String | Auto-generated UUID | Unique worker identifier |
| worker.compute_capacity.compute_units | --compute-capacity | - | Number | 10 | Worker compute capacity (in compute units) |
| worker.environment | - | - | String | development | Service environment (development, staging, production) |
| worker.inputs_folder | --inputs-folder | - | String | . | Path to folder containing input files |
| coordinator.url | --coordinator-url | - | String | http://127.0.0.1:50051 | Coordinator server URL |
| connection.reconnect_interval_seconds | - | - | Number | 5 | Reconnection interval in seconds |
| connection.heartbeat_timeout_seconds | - | - | Number | 30 | Heartbeat timeout in seconds |
| logging.level | - | RUST_LOG | String | debug | Logging level (error, warn, info, debug, trace) |
| logging.format | - | - | String | pretty | Logging format (pretty, json, compact) |
| logging.file_path | - | - | String | - | Optional. Log file path (enables file logging) |
| - | --proving-key | - | String | ~/.zisk/provingKey | Path to setup folder |
| - | --elf | - | String | - | Path to ELF file |
| - | --asm | - | String | ~/.zisk/cache | Path to ASM file (mutually exclusive with --emulator) |
| - | --emulator | - | Boolean | false | Use prebuilt emulator (mutually exclusive with --asm) |
| - | --asm-port | - | Number | 23115 | Base port for Assembly microservices |
| - | --shared-tables | - | Boolean | false | Whether to share tables when worker is running in a cluster |
| - | -v, -vv, -vvv, ... | - | Number | 0 | Verbosity level (0=error, 1=warn, 2=info, 3=debug, 4=trace) |
| - | -d, --debug | - | String | - | Enable debug mode with optional component filter |
| - | --verify-constraints | - | Boolean | false | Whether to verify constraints |
| - | --unlock-mapped-memory | - | Boolean | false | Unlock memory map for the ROM file (mutually exclusive with --emulator) |
| - | --hints | - | Boolean | false | Enable precompile hints processing |
| - | -m, --minimal-memory | - | Boolean | false | Use minimal memory mode |
| - | -r, --rma | - | Boolean | false | Enable RMA mode |
| - | -z, --preallocate | - | Boolean | false | GPU preallocation flag |
| - | -t, --max-streams | - | Number | - | Maximum number of GPU streams |
| - | -n, --number-threads-witness | - | Number | - | Number of threads for witness computation |
| - | -x, --max-witness-stored | - | Number | - | Maximum number of witnesses to store in memory |
Configuration File Examples
Example development configuration file:
[worker]
compute_capacity.compute_units = 10
environment = "development"
[logging]
level = "debug"
format = "pretty"
Example production configuration file:
[worker]
worker_id = "my-worker-001"
compute_capacity.compute_units = 10
environment = "production"
inputs_folder = "/app/inputs"
[coordinator]
url = "http://127.0.0.1:50051"
[connection]
reconnect_interval_seconds = 5
heartbeat_timeout_seconds = 30
[logging]
level = "info"
format = "pretty"
file_path = "/var/log/distributed/worker-001.log"
Launching a Proof
To launch a proof generation request, use the prove subcommand of the zisk-coordinator binary. This sends an RPC request to a running coordinator instance.
cargo run --release --bin zisk-coordinator -- prove --inputs-uri <input_filename> --compute-capacity 10
The --compute-capacity flag indicates the total compute units required to generate a proof. The coordinator will assign one or more workers to meet this capacity, distributing the workload if multiple workers are needed. Requests exceeding the combined capacity of available workers will not be processed and an error will be returned.
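The allocation rule described above can be sketched with a simple greedy selection. This is an illustrative model, not the actual scheduler: workers are picked until their combined capacity meets the request, and the request fails if the total available capacity is insufficient.

```rust
// Greedy sketch of capacity-based worker assignment (illustrative only).
// `available` holds (worker_id, compute_units) pairs.
fn assign_workers(available: &[(String, u64)], required: u64) -> Result<Vec<String>, String> {
    let total: u64 = available.iter().map(|(_, c)| c).sum();
    if total < required {
        return Err(format!("insufficient capacity: {} < {}", total, required));
    }
    let mut assigned = Vec::new();
    let mut acc = 0;
    for (id, cap) in available {
        if acc >= required {
            break;
        }
        assigned.push(id.clone());
        acc += cap;
    }
    Ok(assigned)
}

fn main() {
    let workers = vec![
        ("w1".to_string(), 4),
        ("w2".to_string(), 4),
        ("w3".to_string(), 4),
    ];
    // 10 units requested: three 4-unit workers are assigned.
    println!("{:?}", assign_workers(&workers, 10));
    // 20 units requested: exceeds combined capacity, returns an error.
    println!("{:?}", assign_workers(&workers, 20));
}
```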
Prove Subcommand Arguments
| CLI Argument | Short | Type | Default | Description |
|---|---|---|---|---|
| --inputs-uri | - | String | - | Path to the input file for proof generation |
| --compute-capacity | -c | Number | required | Total compute units required for the proof |
| --coordinator-url | - | String | http://127.0.0.1:50051 | URL of the coordinator to send the request to |
| --data-id | - | String | Auto (from filename or UUID) | Custom identifier for the proof job |
| --hints-uri | - | String | - | Path/URI to the precompile hints source |
| --stream-hints | - | Boolean | false | Stream hints from the coordinator to workers via gRPC (see Hints Stream) |
| --direct-inputs | -x | Boolean | false | Send input data inline via gRPC instead of as a file path |
| --minimal-compute-capacity | -m | Number | Same as --compute-capacity | Minimum acceptable compute capacity (allows partial worker allocation) |
| --simulated-node | - | Number | - | Simulated node ID (for testing) |
Input and Hints Modes
The prove subcommand supports two modes for delivering inputs and hints to workers:
Input modes (controlled by --inputs-uri and --direct-inputs):
- Path mode (default): The coordinator sends the input file path to workers. Workers must have access to the file at the specified path.
- Data mode (`--direct-inputs`): The coordinator reads the input file and sends its contents inline via gRPC. Workers do not need local access to the file.
Hints modes (controlled by --hints-uri and --stream-hints):
- Path mode (default): The coordinator sends the hints URI to workers. Each worker loads hints from the specified path independently.
- Streaming mode (`--stream-hints`): The coordinator reads hints from the URI and broadcasts them to all workers in real time via gRPC. See the Hints Stream documentation for details.
Examples:
# Basic proof with file path inputs
zisk-coordinator prove --inputs-uri /data/inputs/my_input.bin --compute-capacity 10
# Send input data directly (workers don't need local file access)
zisk-coordinator prove --inputs-uri /data/inputs/my_input.bin -x --compute-capacity 10
# With precompile hints in path mode (workers load hints locally)
zisk-coordinator prove --inputs-uri input.bin --hints-uri /data/hints/hints.bin --compute-capacity 10
# With precompile hints in streaming mode (coordinator broadcasts to workers)
zisk-coordinator prove --inputs-uri input.bin --hints-uri unix:///tmp/hints.sock --stream-hints --compute-capacity 10
Administrative Operations
Health Checks and Monitoring
The coordinator exposes administrative endpoints for monitoring:
# Basic health check
grpcurl -plaintext 127.0.0.1:50051 zisk.distributed.api.v1.ZiskDistributedApi/HealthCheck
# System status
grpcurl -plaintext 127.0.0.1:50051 zisk.distributed.api.v1.ZiskDistributedApi/SystemStatus
# List active jobs
grpcurl -plaintext -d '{"active_only": true}' \
127.0.0.1:50051 zisk.distributed.api.v1.ZiskDistributedApi/JobsList
# List connected workers
grpcurl -plaintext -d '{"available_only": true}' \
127.0.0.1:50051 zisk.distributed.api.v1.ZiskDistributedApi/WorkersList
Troubleshooting
Common Issues
Worker can't connect to coordinator:
- Verify coordinator is running and accessible on the specified port
- Check firewall settings if coordinator and worker are on different machines
- Ensure the correct URL format: `http://host:port` (not `https://` for the default setup)
Configuration not loading:
- Verify TOML syntax with a TOML validator
- Check file permissions on configuration files
- Use CLI overrides to test specific values
Worker not receiving tasks:
- Check worker registration in coordinator logs
- Verify compute capacity is appropriate for available tasks
- Ensure worker ID is unique if running multiple workers
- Confirm coordinator has active jobs to distribute
Input file not found errors:
- Verify the input file exists in the worker's `--inputs-folder` directory
- Check file permissions: the worker needs read access to input files
- Ensure you're using the filename only (not the full path) when launching proofs
- Confirm the `--inputs-folder` path is correct and accessible
Port conflicts:
- Use the `--port` flag or update the configuration file to change ports
- Check for other services using the same ports
Debug Mode
Enable detailed logging for troubleshooting by modifying configuration files or using CLI arguments:
# Coordinator with debug logging (via config file)
cargo run --release --bin zisk-coordinator -- --config debug-coordinator.toml
# Worker with debug logging (via config file)
cargo run --release --bin zisk-worker -- --config debug-worker.toml
Where debug-coordinator.toml or debug-worker.toml contains:
[logging]
level = "debug"
format = "pretty"
Log Files
When file logging is enabled, logs are written to the paths specified in the configuration files. Ensure the application has write permissions for these paths.
[logging]
file_path = "/var/log/distributed/coordinator.log"
Hints Stream
The hints stream accelerates proof generation by offloading expensive operations outside the zkVM execution, then feeding the results back as verifiable data through a high-performance, parallel pipeline. Hints are preprocessed results that allow operations to be handled externally while remaining fully verifiable inside the VM. The system supports two categories of hints:
- Precompile hints: Cryptographic operations (SHA-256, Keccak-256, elliptic curve operations, pairings, etc.) that are computationally expensive inside a zkVM.
- Input hints: Data that needs to be passed to the zkVM as input during execution.
The system is designed around three core principles:
- Pre-computing results outside the VM: The guest program emits hint requests describing the operation and its inputs.
- Streaming results back: A dedicated pipeline processes these requests in parallel, maintaining order, and feeds results to the prover via shared memory.
- Verifying inside the VM: The zkVM circuits verify that the precomputed results are correct, avoiding the cost of computing them inside the zkVM.
flowchart LR
A["Guest program<br/><small>Emits hints request</small>"] --> B["ZiskStream"]
B --> C["HintsProcessor<br/><small>Parallel engine</small>"]
C --> D["StreamSink<br/><small>ASM emulator/file output</small>"]
Table of Contents
- Hint Format and Protocol
- Hints in CLI Execution
- Hints in Distributed Execution
- Custom Hint Handlers
- Generating Hints in Guest Programs
1. Hint Format and Protocol
1.1. Hint Request Format
Hints are transmitted as a stream of u64 values. Each hint request consists of a header (1 u64) followed by data (N u64 values).
┌─────────────────────────────────────────────────────────────┐
│ Header (u64) │
├·····························································┤
│ Hint Code (32 bits) │ Length (32 bits) │
├─────────────────────────────────────────────────────────────┤
│ Data[0] (u64) │
├─────────────────────────────────────────────────────────────┤
│ Data[1] (u64) │
├─────────────────────────────────────────────────────────────┤
│ ... │
├─────────────────────────────────────────────────────────────┤
│ Data[N-1] (u64) │
└─────────────────────────────────────────────────────────────┘
where N = ceil(Length / 8)
- Hint Code (upper 32 bits): Control code or Data Hint Type
- Length (lower 32 bits): Payload data size in bytes. The last `u64` may contain padding bytes.
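The header packing described above can be sketched as a standalone Rust illustration (not ZisK's actual code): the hint code occupies the upper 32 bits, the byte length the lower 32 bits, and the number of data words is `ceil(length / 8)`.

```rust
// Pack a hint header: code in the upper 32 bits, length in bytes in the lower 32.
fn encode_header(code: u32, len_bytes: u32) -> u64 {
    ((code as u64) << 32) | len_bytes as u64
}

// Unpack a header, also computing N = ceil(length / 8) data words.
fn decode_header(header: u64) -> (u32, u32, usize) {
    let code = (header >> 32) as u32;
    let len = (header & 0xFFFF_FFFF) as u32;
    let n_words = (len as usize).div_ceil(8);
    (code, len, n_words)
}

fn main() {
    // A SHA-256 hint (0x0100) with a 32-byte input.
    let h = encode_header(0x0100, 32);
    println!("{:#018x}", h); // 0x0000010000000020
    println!("{:?}", decode_header(h)); // (256, 32, 4)
    // Pass-through variant: bit 31 of the hint code set.
    let pt = encode_header(0x8000_0000 | 0x0100, 32);
    println!("{:#018x}", pt); // 0x8000010000000020
}
```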
1.2. Control Hint Types
The following control codes are defined:
- `0x00` (START): Start a new hint stream. Resets processor state and sequence counters. Must be the first hint in the first batch.
- `0x01` (END): End the current hint stream. The processor will wait for all pending hints to be processed before returning. Must be the last hint in its batch; only a `CTRL_START` may follow in a subsequent batch.
- `0x02` (CANCEL): [Reserved for future use] Cancel the current stream and stop processing further hints.
- `0x03` (ERROR): [Reserved for future use] Indicate an error has occurred; stop processing further hints.
Control codes are for stream control only and carry no associated data (their Length field must be zero).
1.3. Data Hint Types
For data hints, the hint code (32 bits) is structured as follows:
- Bit 31 (MSB): Pass-through flag. When set, the data bypasses computation and is forwarded directly to the sink.
- Bits 0-30: The hint type identifier (control, built-in, or custom code), e.g., `HINT_SHA256`, `HINT_BN254_G1_ADD`, `HINT_SECP256K1_RECOVER`.
Example: A SHA-256 hint (0x0100) with a 32-byte input:
Header: 0x00000100_00000020
Data[0]: first_8_input_bytes_as_u64
Data[1]: next_8_input_bytes_as_u64
Data[2]: next_8_input_bytes_as_u64
Data[3]: last_8_input_bytes_as_u64
The same hint with the pass-through flag set (bit 31), forwarding pre-computed data directly to the sink without invoking the SHA-256 handler:
Header: 0x80000100_00000020
1.3.1 Stream Batching
The hints protocol supports chunking for individual hints that exceed the transport’s message size limit (currently 128 KB). Each message in the stream contains either a single complete hint or one chunk of a larger hint — hints are never combined in the same message.
When a hint exceeds the size limit, it must be split into multiple sequential chunks, each sent as a separate message. Each chunk includes a header specifying the total length of the complete hint, allowing the receiver to reassemble all chunks before processing. For example, a hint with a 300 KB payload would be split into three messages:
Message 1: Header (code + total length), Data[0..N] (first 128 KB chunk)
Message 2: Header (code + total length), Data[0..N] (second 128 KB chunk)
Message 3: Header (code + total length), Data[0..M] (final 44 KB chunk)
The receiver buffers incoming chunks and reassembles them based on the total length specified in the header before invoking the hint handler. This allows the system to handle arbitrarily large hints while respecting transport limitations.
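The split/reassemble cycle can be sketched as follows. This is an illustrative model of the rule above (a 128 KB limit, each message repeating the header with the total length), not the transport's actual implementation; it assumes a non-empty payload.

```rust
const CHUNK_LIMIT: usize = 128 * 1024; // transport message size limit in bytes

// Split a payload into messages: each carries the header (with the *total*
// payload length) plus at most CHUNK_LIMIT bytes of data.
fn split_into_messages(code: u32, payload: &[u8]) -> Vec<(u64, Vec<u8>)> {
    let header = ((code as u64) << 32) | payload.len() as u64;
    payload
        .chunks(CHUNK_LIMIT)
        .map(|c| (header, c.to_vec()))
        .collect()
}

// Reassemble chunks in arrival order, using the total length from the header.
fn reassemble(messages: &[(u64, Vec<u8>)]) -> Vec<u8> {
    let total = (messages[0].0 & 0xFFFF_FFFF) as usize;
    let mut buf = Vec::with_capacity(total);
    for (_, chunk) in messages {
        buf.extend_from_slice(chunk);
    }
    debug_assert_eq!(buf.len(), total);
    buf
}

fn main() {
    // A 300 KB payload splits into 3 messages: 128 KB + 128 KB + 44 KB.
    let payload = vec![0u8; 300 * 1024];
    let msgs = split_into_messages(0x0100, &payload);
    println!("{} messages, last chunk {} bytes", msgs.len(), msgs.last().unwrap().1.len());
    assert_eq!(reassemble(&msgs), payload);
}
```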
1.3.2 Pass-Through Hints
When bit 31 of the hint code is set (e.g., 0x8000_0000 | actual_code), the hint is marked as pass-through:
- The data payload is forwarded directly to the sink without invoking any handler.
- No worker thread is spawned; the data is queued immediately in the reorder buffer.
- This is useful for pre-computed results that don't need processing.
1.4. Hint Code Types
| Category | Code Range | Description |
|---|---|---|
| Control | 0x0000-0x000F | Stream lifecycle management |
| Built-in | 0x0100-0x0800 | Cryptographic precompile operations |
| Input | 0xF0000 | Input data hints |
| Custom | User-defined | Application-specific handlers |
Note: Custom hint codes can technically use any value not occupied by control or built-in codes. By convention, codes `0xA000`-`0xFFFF` are recommended for custom use to avoid future conflicts as new built-in types are added. The processor does not enforce a range restriction; any unrecognized code is treated as custom.
1.4.1. Control Codes
Control codes manage the stream lifecycle and do not carry computational data:
| Code | Name | Description |
|---|---|---|
| 0x0000 | CTRL_START | Resets processor state. Must be the first hint in the first batch. |
| 0x0001 | CTRL_END | Signals end of stream. Blocks until all pending hints complete. Must be the last hint. |
| 0x0002 | CTRL_CANCEL | [Reserved for future use] Cancels the current stream. Sets error flag and stops processing. |
| 0x0003 | CTRL_ERROR | [Reserved for future use] External error signal. Sets error flag and stops processing. |
1.4.2. Built-in Hint Types
| Code | Name | Description |
|---|---|---|
| 0x0100 | Sha256 | SHA-256 hash computation |
| 0x0200 | Bn254G1Add | BN254 G1 point addition |
| 0x0201 | Bn254G1Mul | BN254 G1 scalar multiplication |
| 0x0205 | Bn254PairingCheck | BN254 pairing check |
| 0x0300 | Secp256k1EcdsaAddressRecover | Secp256k1 ECDSA address recovery |
| 0x0301 | Secp256k1EcdsaVerifyAddressRecover | Secp256k1 ECDSA verify + address recovery |
| 0x0380 | Secp256r1EcdsaVerify | Secp256r1 (P-256) ECDSA verification |
| 0x0400 | Bls12_381G1Add | BLS12-381 G1 point addition |
| 0x0401 | Bls12_381G1Msm | BLS12-381 G1 multi-scalar multiplication |
| 0x0405 | Bls12_381G2Add | BLS12-381 G2 point addition |
| 0x0406 | Bls12_381G2Msm | BLS12-381 G2 multi-scalar multiplication |
| 0x040A | Bls12_381PairingCheck | BLS12-381 pairing check |
| 0x0410 | Bls12_381FpToG1 | BLS12-381 map field element to G1 |
| 0x0411 | Bls12_381Fp2ToG2 | BLS12-381 map field element to G2 |
| 0x0500 | ModExp | Modular exponentiation |
| 0x0600 | VerifyKzgProof | KZG polynomial commitment proof verification |
| 0x0700 | Keccak256 | Keccak-256 hash computation |
| 0x0800 | Blake2bCompress | Blake2b compression function |
1.4.3. Input Hint Type
Input hints allow passing data to the zkVM during execution. Unlike precompile hints that are processed by worker threads, input hints are forwarded directly to a separate inputs sink.
| Code | Name | Description |
|---|---|---|
| 0xF0000 | Input | Input data for the zkVM |
The input hint payload format is:
- First 8 bytes: Length of the input data (as `u64`, little-endian)
- Remaining bytes: The actual input data, padded to 8-byte alignment
Input hints are not processed by the parallel worker pool; instead, they are immediately submitted to the inputs sink for consumption by the zkVM.
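The input hint payload layout can be sketched as a standalone Rust illustration (not ZisK's actual encoder): an 8-byte little-endian length prefix, then the data, then zero-padding up to 8-byte alignment.

```rust
// Encode an input hint payload: u64 little-endian length prefix,
// followed by the data, zero-padded to 8-byte alignment.
fn encode_input_payload(data: &[u8]) -> Vec<u8> {
    let mut payload = (data.len() as u64).to_le_bytes().to_vec();
    payload.extend_from_slice(data);
    while payload.len() % 8 != 0 {
        payload.push(0); // padding byte
    }
    payload
}

fn main() {
    // 8-byte length prefix + 5 data bytes + 3 padding bytes = 16 bytes.
    let p = encode_input_payload(b"hello");
    let len = u64::from_le_bytes(p[..8].try_into().unwrap());
    println!("{} bytes total, length prefix = {}", p.len(), len);
}
```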
1.4.4. Custom Hint Types
Custom hint types allow users to define their own hint handlers for application-specific logic. Users can register custom handlers via the HintsProcessor builder API, providing a mapping from hint code to a processing function (see Custom Hint Handlers). By convention, codes in the range 0xA000-0xEFFFF are recommended for custom use to avoid conflicts with current and future built-in types. If a data hint is received with an unregistered code, the processor returns an error and stops processing immediately.
1.5. Stream Protocol
A valid hint stream follows this protocol:
CTRL_START ← Reset state, begin stream
[Hint_1] [Hint_2] ... [Hint_N] ← Data hints (precompile, input, or custom)
CTRL_END ← Wait for completion, end stream
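The protocol above can be illustrated by building a minimal valid stream as a flat `u64` sequence (an illustrative sketch, not ZisK code): a `CTRL_START` header, one SHA-256 hint with a 32-byte payload, and a `CTRL_END` header.

```rust
// Pack a hint header: code in the upper 32 bits, length in bytes in the lower 32.
fn header(code: u32, len_bytes: u32) -> u64 {
    ((code as u64) << 32) | len_bytes as u64
}

fn main() {
    let mut stream: Vec<u64> = Vec::new();
    stream.push(header(0x0000, 0));       // CTRL_START (control codes carry no data)
    stream.push(header(0x0100, 32));      // SHA-256 hint with a 32-byte payload
    stream.extend_from_slice(&[0u64; 4]); // 32 bytes = 4 data words
    stream.push(header(0x0001, 0));       // CTRL_END (control codes carry no data)
    println!("stream of {} u64 words", stream.len()); // 7 words
}
```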
2. Hints in CLI Execution
Four CLI commands (execute, prove, verify-constraints, stats) support the hints stream system via the --hints option. The URI determines the input stream source for hints, which can be a file, Unix socket, QUIC stream, or other custom transport.
The supported schemes are:
--hints file://path → File stream reader
--hints unix://path → Unix socket stream reader
--hints quic://host:port → Quic stream reader
--hints (plain path) → File stream reader
Note: Only ASM mode supports hints. The emulator mode does not use the hints pipeline.
3. Hints in Distributed Execution
In the distributed proving system, hints are received by the coordinator and broadcasted to all workers via gRPC. The coordinator runs a relay that validates incoming hint messages, assigns sequence numbers for ordering, and dispatches them to workers asynchronously. Workers buffer incoming messages and reorder them by sequence number before processing. The processed hints are then submitted to the sink in the correct order.
Alternatively, workers can load hints from a local path/URI instead of streaming them from the coordinator, which is useful for debugging.
3.1. Architecture
flowchart TD
A["Guest program<br/><small>Emits hints request</small>"] --> B
subgraph H["Coordinator"]
B["ZiskStream"]
B --> C["Hints Relay<br/><small>Validates<br>Broadcast to all workers (async)</small>"]
end
C --> E["Worker 1<br/><small>Stream incoming hints + Reorder</small>"]
C --> F["Worker 2<br/><small>Stream incoming hints + Reorder</small>"]
C --> G["Worker N<br/><small>Stream incoming hints + Reorder</small>"]
E --> E1["HintsProcessor<br/><small>Parallel engine</small>"]
E1 --> E2["StreamSink<br/><small>ASM emulator/file output</small>"]
F --> F1["HintsProcessor<br/><small>Parallel engine</small>"]
F1 --> F2["StreamSink<br/><small>ASM emulator/file output</small>"]
G --> G1["HintsProcessor<br/><small>Parallel engine</small>"]
G1 --> G2["StreamSink<br/><small>ASM emulator/file output</small>"]
style H fill:transparent,stroke-dasharray: 5 5
When the coordinator receives a hint request from the guest program, it parses the incoming u64 stream, validates control codes, assigns sequence numbers for ordering, and broadcasts the data to all workers.
Three message types are sent over gRPC to workers:
| StreamMessageKind | When | Payload |
|---|---|---|
| Start | On CTRL_START | None |
| Data | For each data batch | Sequence number + raw bytes |
| End | On CTRL_END | None |
Each worker receives the stream of hints, buffers them if they arrive out of order, and sends them to the HintsProcessor for parallel processing. The HintsProcessor ensures that results are submitted to the sink in the original order.
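The worker-side reordering described above can be sketched with a buffer keyed by sequence number that releases messages only in contiguous order. This is an illustrative model, not the actual worker code.

```rust
use std::collections::BTreeMap;

// Buffer out-of-order messages by sequence number; release them in order.
struct ReorderBuffer {
    next_seq: u64,
    pending: BTreeMap<u64, Vec<u8>>,
}

impl ReorderBuffer {
    fn new() -> Self {
        Self { next_seq: 0, pending: BTreeMap::new() }
    }

    // Insert a message; return every message that is now deliverable in order.
    fn push(&mut self, seq: u64, data: Vec<u8>) -> Vec<Vec<u8>> {
        self.pending.insert(seq, data);
        let mut ready = Vec::new();
        while let Some(data) = self.pending.remove(&self.next_seq) {
            ready.push(data);
            self.next_seq += 1;
        }
        ready
    }
}

fn main() {
    let mut buf = ReorderBuffer::new();
    assert!(buf.push(1, vec![1]).is_empty());      // out of order: held back
    let ready = buf.push(0, vec![0]);              // seq 0 arrives: releases 0 and 1
    println!("released {} messages", ready.len()); // 2
}
```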
3.2. Hints Mode Configuration
When starting a worker, if the --hints option is provided, the worker prepares to receive hints from the coordinator. When launching a proof generation job that supplies hints, the workers must therefore be started with hints enabled so they can receive and process them.
A hints stream system can be configured in two ways:
- Streaming mode: Workers receive hints from the coordinator via gRPC. This is the default and recommended mode for production, as it allows real-time processing of hints as they are generated.
- Path mode: Workers load hints from a local path/URI. This is useful for debugging or when hints are pre-generated and stored in a file. In this mode, the coordinator does not send hints to workers; instead, each worker reads the hints directly from the specified path.
3.2.1 Coordinator Hints Streaming Mode
To start the coordinator in streaming mode, provide the --hints-uri option with a URI that the coordinator will connect to, and set --stream-hints to enable broadcasting to workers. The URI determines the input stream source for hints.
The supported schemes are:
--hints-uri file://path → File stream reader
--hints-uri unix://path → Unix socket stream reader
--hints-uri quic://host:port → Quic stream reader
--hints-uri (plain path) → File stream reader
Example to launch a prove command in streaming mode:
zisk-coordinator prove --hints-uri unix:///tmp/hints.sock --stream-hints ...
3.2.2 Worker Hints Non-Streaming Mode
To start a worker in non-streaming mode, provide the --hints-uri option with a URI that points to the local path on the worker where hints are stored, and omit the --stream-hints option. In this mode the worker(s) load hints from the specified URI instead of receiving them from the coordinator. This is useful for debugging or when hints are pre-generated and stored in a file.
4. Custom Hint Handlers
Register custom handlers via the builder pattern:
```rust
let processor = HintsProcessor::builder(my_sink)
    .custom_hint(0xA000, |data: &[u64]| -> Result<Vec<u64>> {
        // Custom processing logic
        Ok(vec![data[0] * 2])
    })
    .custom_hint(0xA001, |data| {
        // Another custom handler
        Ok(transform(data))
    })
    .build()?;
```
Requirements:
- The handler function must be `Fn(&[u64]) -> Result<Vec<u64>> + Send + Sync + 'static`.
- Custom hint codes should not conflict with built-in codes (`0x0000`-`0x0700`). By convention, use codes in the range `0xA000`-`0xFFFF`.
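The dispatch behind this registration pattern can be sketched generically. The `Dispatcher` below is a hypothetical stand-in for the real `HintsProcessor` (which also handles parallelism and ordering); it only illustrates looking up a registered handler by code and failing on an unregistered one.

```rust
use std::collections::HashMap;

// Hypothetical handler dispatch by hint code (not the actual HintsProcessor).
type Handler = Box<dyn Fn(&[u64]) -> Result<Vec<u64>, String> + Send + Sync>;

struct Dispatcher {
    handlers: HashMap<u32, Handler>,
}

impl Dispatcher {
    fn new() -> Self {
        Self { handlers: HashMap::new() }
    }

    // Builder-style registration, mirroring the custom_hint API shown above.
    fn custom_hint<F>(mut self, code: u32, f: F) -> Self
    where
        F: Fn(&[u64]) -> Result<Vec<u64>, String> + Send + Sync + 'static,
    {
        self.handlers.insert(code, Box::new(f));
        self
    }

    fn process(&self, code: u32, data: &[u64]) -> Result<Vec<u64>, String> {
        match self.handlers.get(&code) {
            Some(h) => h(data),
            None => Err(format!("unregistered hint code {:#06x}", code)),
        }
    }
}

fn main() {
    let d = Dispatcher::new().custom_hint(0xA000, |data| Ok(vec![data[0] * 2]));
    println!("{:?}", d.process(0xA000, &[21])); // handler doubles the first word
    println!("{:?}", d.process(0xB000, &[21])); // unregistered code: error
}
```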
5. Generating Hints in Guest Programs
To generate hints from the guest program you need to follow these steps and requirements:
- Emit hint requests: Patch your code or dependent crates to call the external FFI hints helper functions that generate the hint input data required later by the `HintsProcessor`. See FFI Hints Helper Functions for the list of available built-in FFI hints helper functions, or Custom Hints Generation to learn how to generate custom hints from the guest program.
- Add the `ziskos` crate to your guest `Cargo.toml`.
- Initialize and finalize the hint stream: Call the hints init and close functions immediately before and after the section of code that executes precompile logic.
- Enable hints at compile time: Compile your guest program with `RUSTFLAGS='--cfg zisk_hints'` for the native target to activate hint code generation and FFI helper functions in the `ziskos` crate.
- Ensure deterministic execution: Verify that both the native execution that generates hints and the guest compiled for the `zkvm/zisk` target execute deterministically and produce/consume hints in the exact same order. See Deterministic Execution Requirement.
To illustrate these steps, consider the zec-reth guest program, which executes and verifies Ethereum Mainnet blocks using the ZisK zkVM:
https://github.com/0xPolygonHermez/zisk-eth-client/tree/main-reth/bin/guest
5.1 Emit Hint Requests
zec-reth relies on reth crates, which expose a Crypto trait that allows a guest program to override precompile implementations. This enables zkVM-optimized implementations while also emitting hints so the computation can be performed outside the zkVM.
For example, the BN254 elliptic curve addition (bn254_g1_add) implementation for the Crypto trait can be found here:
https://github.com/0xPolygonHermez/zisk-eth-client/blob/86b71b39d35efb9894696cab115a1177f3e47dbf/crates/guest-reth/src/crypto/impls.rs#L87
In that file, two target-specific implementations are provided: one for zkvm/zisk and one for native (non-zkVM) targets. When compiling with --cfg zisk_hints for the native target, the zkVM-specific implementation emits a hint request using the FFI helper:
```rust
#[cfg(zisk_hints)]
unsafe {
    hint_bn254_g1_add(p1, p2);
}
```
where the FFI helper is declared as `pub fn hint_bn254_g1_add(p1: *const u8, p2: *const u8);`.
This call generates the hint input data using the exact input values that will later be used by the ZisK zkVM when executing the zkvm/zisk target code. This hint input data is consumed later by the HintsProcessor, allowing the bn254_g1_add computation to be performed outside the zkVM while remaining fully verifiable inside the circuit.
After the hint generation, execution continues in the native target code to compute the bn254_g1_add result.
From the guest program, we generate hints containing the input data for the corresponding zisklib functions (in this example, the bn254_g1_add_c function). These zisklib functions may internally invoke one or more precompiles to produce the final result.
When the hints are processed by the HintsProcessor, it executes the same zisklib function using the implementation code for the zkvm/zisk target. This produces the exact precompile results expected when executing the guest ELF inside the zkVM.
As a result, for each zisklib function invocation, the HintsProcessor may generate one or more precompile hint results corresponding to the precompile inputs originally emitted by the guest.
5.2 Initialize/Finalize Hint Stream
To start hints generation from your guest program you must call one of the following functions from the ziskos::hints crate:
```rust
pub fn init_hints_file(
    hints_file_path: PathBuf,
    ready: Option<oneshot::Sender<()>>,
) -> Result<()>
```
This function stores the generated hints in the file specified by the hints_file_path parameter.
```rust
pub fn init_hints_socket(
    socket_path: PathBuf,
    debug_file: Option<PathBuf>,
    ready: Option<oneshot::Sender<()>>,
) -> Result<()>
```
This function sends the hints through the Unix socket specified by the socket_path parameter.
The optional ready parameter can be used to synchronize with the host when the guest program is executed in a separate thread to generate hints in parallel: it signals once hint generation is initialized and ready to start writing hints through the Unix socket.
The optional debug_file parameter can be used to store, in the specified file, a copy of the hints sent through the socket. This file can later be used for debugging purposes.
To close hints generation you must call:
```rust
pub fn close_hints() -> Result<()>
```
You should call these functions only when the guest is compiled for the native target used for hints generation. This can be achieved by placing the code under the following configuration flag:
```rust
#[cfg(zisk_hints)]
{
    // Initialize/finalize hints generation code
    // ...
}
```
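Putting the initialize and finalize calls together, a guest main for the native hints target might look like the following minimal sketch, assuming the ziskos::hints API shown above; the file name is illustrative. The cfg gate keeps this code out of zkvm/zisk builds entirely.

```rust
// Sketch: open the hint stream at the start of the guest, close it at the
// end. Both gated blocks compile out when --cfg zisk_hints is not set.
fn main() {
    #[cfg(zisk_hints)]
    {
        use std::path::PathBuf;
        // Illustrative path; pass a oneshot sender instead of None if the
        // host needs a readiness signal.
        ziskos::hints::init_hints_file(PathBuf::from("guest.hints"), None)
            .expect("failed to initialize hints file");
    }

    // ... guest program logic that triggers precompile/hint emission ...

    #[cfg(zisk_hints)]
    {
        ziskos::hints::close_hints().expect("failed to close hints stream");
    }
}
```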
You can review how hints generation is initialized and finalized in the zec-reth guest here:
https://github.com/0xPolygonHermez/zisk-eth-client/blob/main-reth/bin/guest/src/main.rs
5.3 Enable Hints at Compile Time
Once the guest program is set up to generate hints for the native target, it must be compiled with the zisk_hints configuration flag enabled:
```bash
RUSTFLAGS='--cfg zisk_hints' cargo build --release
```
After compiling, executing the guest program will generate the hints binary file at the specified location (if init_hints_file was used) or start writing hints to the specified Unix socket (if init_hints_socket was used).
If a hints file was generated, it can be consumed using the --hints flag in the cargo-zisk commands that support hints (as explained in Hints in CLI Execution).
If you want to display metrics in the console about the number of hints generated during native guest execution, you can additionally compile the guest with the --cfg zisk_hints_metrics flag.
To enable hint support when executing the guest inside the zkVM (ELF guest), you must pass the --hints flag when generating the assembly ROM using the cargo-zisk rom-setup command.
NOTE: Hint processing is not supported when executing the guest ELF file in emulation mode.
5.4 Deterministic Execution Requirement
An important requirement of the hints generation flow is that the native execution that generates the hints must be fully deterministic and always produce hints in the exact same order.
Furthermore, the order of hints generated during native execution must match the order in which the guest program compiled for the zkvm/zisk target expects to receive them. Since the zkVM execution is also deterministic, any divergence in hint ordering between native execution and zkVM execution will result in incorrect behavior.
To guarantee deterministic hint generation, the code paths that directly or indirectly generate hints must avoid:
- The use of threads or parallel execution.
- Data structures such as `HashMap` (or any structure based on randomized hash seeds) when iterated in loops that directly or indirectly call precompile/hint functions.
Using threads or iterating over non-deterministically ordered data structures may cause the hint generation order to vary between runs, breaking the required alignment between native and zkVM executions.
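For instance, replacing a `HashMap` with a `BTreeMap` gives a stable iteration order. The sketch below (illustrative, not ZisK code) shows the deterministic ordering that keeps hint emission aligned across runs:

```rust
use std::collections::BTreeMap;

// Loops that emit hints should iterate deterministically ordered structures.
// A BTreeMap yields keys in sorted order on every run, unlike HashMap, whose
// iteration order depends on a randomized hash seed.
fn hint_emission_order(data: &BTreeMap<u32, u64>) -> Vec<u32> {
    // Each iteration here would call a precompile/hint function; the order
    // of those calls is now identical across runs.
    data.keys().copied().collect()
}

fn main() {
    let mut m = BTreeMap::new();
    m.insert(3, 30);
    m.insert(1, 10);
    m.insert(2, 20);
    println!("{:?}", hint_emission_order(&m)); // always [1, 2, 3]
}
```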
5.5 FFI Hints Helper Functions
| Code | Function |
|---|---|
| `0x0100` | `fn hint_sha256(f_ptr: *const u8, f_len: usize);` |
| `0x0200` | `fn hint_bn254_g1_add(p1: *const u8, p2: *const u8);` |
| `0x0201` | `fn hint_bn254_g1_mul(point: *const u8, scalar: *const u8);` |
| `0x0205` | `fn hint_bn254_pairing_check(pairs: *const u8, num_pairs: usize);` |
| `0x0300` | `fn hint_secp256k1_ecdsa_address_recover(sig: *const u8, recid: *const u8, msg: *const u8);` |
| `0x0301` | `fn hint_secp256k1_ecdsa_verify_and_address_recover(sig: *const u8, msg: *const u8, pk: *const u8);` |
| `0x0380` | `fn hint_secp256r1_ecdsa_verify(msg: *const u8, sig: *const u8, pk: *const u8);` |
| `0x0400` | `fn hint_bls12_381_g1_add(a: *const u8, b: *const u8);` |
| `0x0401` | `fn hint_bls12_381_g1_msm(pairs: *const u8, num_pairs: usize);` |
| `0x0405` | `fn hint_bls12_381_g2_add(a: *const u8, b: *const u8);` |
| `0x0406` | `fn hint_bls12_381_g2_msm(pairs: *const u8, num_pairs: usize);` |
| `0x040A` | `fn hint_bls12_381_pairing_check(pairs: *const u8, num_pairs: usize);` |
| `0x0410` | `fn hint_bls12_381_fp_to_g1(fp: *const u8);` |
| `0x0411` | `fn hint_bls12_381_fp2_to_g2(fp2: *const u8);` |
| `0x0500` | `fn hint_modexp_bytes(base_ptr: *const u8, base_len: usize, exp_ptr: *const u8, exp_len: usize, modulus_ptr: *const u8, modulus_len: usize);` |
| `0x0600` | `fn hint_verify_kzg_proof(z: *const u8, y: *const u8, commitment: *const u8, proof: *const u8);` |
| `0x0700` | `fn hint_keccak256(input_ptr: *const u8, input_len: usize);` |
| `0x0800` | `fn hint_blake2b_compress(...);` |
| `0xF0000` | `fn hint_input_data(input_data_ptr: *const u8, input_data_len: usize);` |
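As a sketch of how a guest might declare and invoke one of these helpers: the extern declaration below mirrors the `hint_keccak256` entry in the table above, while the wrapper function and call site are illustrative, not taken from the ZisK codebase.

```rust
// Without --cfg zisk_hints both gated items compile out, so this function
// is a no-op on other targets.
#[cfg(zisk_hints)]
extern "C" {
    fn hint_keccak256(input_ptr: *const u8, input_len: usize);
}

/// Emits a keccak256 hint for `data` when hint generation is enabled.
fn maybe_emit_keccak_hint(data: &[u8]) {
    #[cfg(zisk_hints)]
    unsafe {
        hint_keccak256(data.as_ptr(), data.len());
    }
    let _ = data; // silence the unused-variable lint when the cfg is off
}

fn main() {
    maybe_emit_keccak_hint(b"example input");
}
```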
5.6 Custom Hints Generation
To extend the built-in hints, you can generate custom hints for new operations. The first step is to register the new hint in the HintsProcessor, as explained in section Custom Hint Handlers. Once the hint is registered, you can generate hints for it from the guest program using the following FFI function:
```rust
fn hint_custom(hint_id: u32, data_ptr: *const u8, data_len: usize, is_result: u8);
```
and following the same guidelines described for the built-in FFI hint helper functions.
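A sketch of preparing a payload for hint_custom follows; the hint ID 0x1000 and the byte layout are hypothetical, and the commented line shows where the FFI invocation would go.

```rust
/// Packs two u64 values into a little-endian byte payload for a custom hint.
/// The layout is hypothetical: use whatever encoding your custom hint
/// handler registered in the HintsProcessor expects.
fn encode_custom_payload(a: u64, b: u64) -> Vec<u8> {
    let mut buf = Vec::with_capacity(16);
    buf.extend_from_slice(&a.to_le_bytes());
    buf.extend_from_slice(&b.to_le_bytes());
    buf
}

fn main() {
    let payload = encode_custom_payload(7, 9);
    // In a real guest compiled with --cfg zisk_hints, this would become:
    // unsafe { hint_custom(0x1000, payload.as_ptr(), payload.len(), 0) }
    println!("payload length: {}", payload.len());
}
```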
Ziskof
Riscof tests
The following test generates the riscof test files, converts the corresponding .elf files into ZisK ROMs, and executes them, writing the output to stdout for comparison against a reference RISC-V implementation. This process is not trivial and has been semi-automated.
First, compile the ZisK Emulator:
```bash
$ cargo clean
$ cargo build --release
```
Second, download and run a Docker image from the riscof repository to generate and run the riscof tests:
```bash
$ docker run --rm -v ./target/release/ziskemu:/program -v ./riscof/:/workspace/output/ -ti hermeznetwork/ziskof:latest
```
The tests can take a few minutes to complete. Any errors will be displayed in red.