This is a reference vignette of the package’s core functionality. Other package vignettes cover additional features.
mirai (Japanese for ‘future’) implements the concept of futures in R.
Futures represent results from code that will complete later. Code executes in a separate R process (daemon) and returns results to the main process (host).
mirai() creates a mirai object from an expression.
It returns immediately without blocking. While the expression evaluates on a daemon, the host process continues working.
Expressions must be self-contained:
reference packages using :: or load them via library() within the expression, and pass in any required objects via ... or .args. This explicit design matches message-passing parallelism exactly: attempting to infer global variables introduces unreliability, which we do not compromise on.
This example mimics an expensive calculation:
library(mirai)
m <- mirai(
{
Sys.sleep(time)
rnorm(5L, mean)
},
time = 2L,
mean = 4.5
)
m
#> < mirai [] >
m$data
#> 'unresolved' logi NA
unresolved(m)
#> [1] TRUE
# Do work whilst unresolved
m[]
#> [1] 5.270938 5.500935 5.818627 3.365140 5.994334
m$data
#> [1] 5.270938 5.500935 5.818627 3.365140 5.994334
A mirai is unresolved until its result is received, then resolved.
Use unresolved() to check its state.
Access results via m$data once resolved.
This will be the return value, or an ‘errorValue’ if the expression errored, crashed, or timed out (see Error Handling).
Use m[] to efficiently wait for and collect the value instead of repeatedly checking unresolved(m).
You may also wait efficiently for mirai (or lists of mirai) to resolve using:
call_mirai() returns when all the mirai passed to it have resolved.
race_mirai() returns when the first mirai passed to it has resolved.
For programmatic use, ‘.expr’ accepts a pre-constructed language object and ‘.args’ accepts a named list of arguments. The following is equivalent:
expr <- quote({Sys.sleep(time); rnorm(5L, mean)})
args <- list(time = 2L, mean = 4)
m1 <- mirai(.expr = expr, .args = args)
m1[]
#> [1] 3.890969 4.009599 3.380416 3.514127 4.899060
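To illustrate the waiting functions, a minimal sketch (with no daemons set, each mirai() here launches an ephemeral daemon):

```r
library(mirai)

# launch several mirai; the host remains free to do other work
tasks <- list(
  mirai(Sys.sleep(0.5)),
  mirai(Sys.sleep(0.1))
)

race_mirai(tasks) # returns once the first mirai resolves
call_mirai(tasks) # returns once all mirai have resolved
```

Both return their argument invisibly, so they can be chained with collection of the results.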
This example performs an asynchronous write operation.
Passing environment() to ‘.args’ conveniently provides all objects from the calling environment (like x and file):
write.csv.async <- function(x, file) {
mirai(write.csv(x, file), .args = environment())
}
m <- write.csv.async(x = rnorm(1e6), file = tempfile())
while (unresolved(m)) {
cat("Writing file...\n")
Sys.sleep(0.5) # or do other work
}
#> Writing file...
#> Writing file...
cat("Write complete:", is.null(m$data))
#> Write complete: TRUE
When writing a mirai() call, don’t worry about where or how it executes.
End-users declare available resources using daemons().
Without daemons configured, each mirai() call creates a new local background process (ephemeral daemon).
daemons() sets up persistent daemons to evaluate mirai expressions:
See local daemons for setup instructions.
Errors return as a character string with classes ‘miraiError’ and ‘errorValue’.
Use is_mirai_error() to test for errors:
m1 <- mirai(stop("occurred with a custom message", call. = FALSE))
m1[]
#> 'miraiError' chr Error: occurred with a custom message
m2 <- mirai(mirai::mirai())
m2[]
#> 'miraiError' chr Error in mirai::mirai(): missing expression, perhaps wrap in {}?
is_mirai_error(m2$data)
#> [1] TRUE
is_error_value(m2$data)
#> [1] TRUE
Error objects include $stack.trace for full stack traces and $condition.class for original condition classes:
f <- function(x) if (x > 0) stop("positive")
m3 <- mirai({f(-1); f(1)}, f = f)
m3[]
#> 'miraiError' chr Error in f(1): positive
m3$data$stack.trace
#> [[1]]
#> .handleSimpleError(function (cnd)
#> {
#> `[[<-`(., "syscalls", sys.calls())
#> }, "positive", base::quote(f(1)))
#>
#> [[2]]
#> stop("positive")
#>
#> [[3]]
#> f(1)
m3$data$condition.class
#> [1] "simpleError" "error" "condition"
Original error condition elements and rlang::abort() metadata are preserved:
m4 <- mirai(rlang::abort("aborted", meta_uid = "UID001"))
m4[]
#> 'miraiError' chr Error: aborted
m4$data$meta_uid
#> [1] "UID001"
User interrupts resolve to class ‘miraiInterrupt’ and ‘errorValue’.
Use is_mirai_interrupt() to test for interrupts:
m4 <- mirai(rlang::interrupt()) # simulates a user interrupt
is_mirai_interrupt(m4[])
#> [1] TRUE
Timeouts (via ‘.timeout’) resolve to ‘errorValue’ of 5L, guarding against hanging processes:
m5 <- mirai(nanonext::msleep(1000), .timeout = 500)
m5[]
#> 'errorValue' int 5 | Timed out
is_mirai_error(m5$data)
#> [1] FALSE
is_mirai_interrupt(m5$data)
#> [1] FALSE
is_error_value(m5$data)
#> [1] TRUE
is_error_value() tests for all mirai execution errors, user interrupts and timeouts.
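A defensive collection pattern built on these predicates might look like the following sketch (the branching logic is illustrative, not prescribed by the package):

```r
library(mirai)

m <- mirai(stop("simulated failure"))
result <- m[]

if (is_error_value(result)) {
  if (is_mirai_interrupt(result)) {
    message("task was interrupted")
  } else if (is_mirai_error(result)) {
    message("task errored: ", result)
  } else {
    message("task timed out or otherwise failed")
  }
} else {
  print(result) # the successful return value
}
```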
Daemons are persistent background processes that receive mirai() requests.
Daemons inherit system configuration (‘.Renviron’, ‘.Rprofile’) and load default packages. To load only the base package (cutting startup time in half), set R_SCRIPT_DEFAULT_PACKAGES=NULL before launching.
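Locally-launched daemons inherit the host's environment variables, so one way to apply this is from the host session before launching (an assumption for local launches; for remote daemons set it in the remote ‘.Renviron’ or shell instead):

```r
library(mirai)

# daemons launched after this load only the base package
Sys.setenv(R_SCRIPT_DEFAULT_PACKAGES = "NULL")
daemons(4)
```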
Specify the number of daemons to launch:
daemons(6)
For CPU-bound work, set n to roughly one less than your number of CPU cores, leaving a core free for the host R process and OS.
Account for any cores reserved for other purposes.
For I/O-bound work (waiting on network, disk, or subprocess), n can exceed core count since daemons spend most of their time idle.
Each local daemon runs a full R process, so check that per-daemon memory footprint times n fits in host RAM.
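A starting heuristic for CPU-bound work, following the guidance above (a sketch; ‘reserved’ stands in for cores committed elsewhere):

```r
library(mirai)

reserved <- 0L # cores kept back for other purposes
n <- max(1L, parallel::detectCores() - 1L - reserved)
daemons(n)
```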
The default dispatcher = TRUE enables optimal first-in-first-out (FIFO) scheduling.
Tasks queue at the dispatcher and are sent to daemons as they become available.
The memory argument caps the approximate total memory (MB, metric — 1 MB = 1,000,000 bytes) of queued task payloads at dispatcher.
New tasks block until existing ones are dispatched, providing memory-based backpressure to prevent the host process from running out of memory.
Current usage is surfaced under the memory field of status() (in MB, matching the argument unit).
It also enables mirai cancellation via stop_mirai() or the .timeout argument to mirai().
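For example, a long-running task can be cancelled while in flight (a sketch; cancellation requires dispatcher mode):

```r
library(mirai)

m <- mirai(Sys.sleep(60))
stop_mirai(m) # cancels the task at the dispatcher
m[]           # resolves to an 'errorValue'
```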
info() provides current statistics as an integer vector:
connections: currently active daemons
cumulative: total daemons ever connected
awaiting: tasks queued at dispatcher
executing: tasks currently evaluating
completed: tasks completed or cancelled
info()
#> connections cumulative awaiting executing completed
#> 6 6 0 0 0
For a fuller picture as a list — including the listening URL and queue memory pressure — use status().
Set daemons to zero to reset. This reverts to creating a new background process per request.
daemons(0)
With dispatcher = FALSE, daemons connect directly to the host process:
daemons(6, dispatcher = FALSE)
Tasks are sent immediately in round-robin fashion, ensuring even distribution. However, scheduling isn’t optimal since task duration is unknown beforehand: tasks may queue behind long-running tasks while other daemons sit idle.
This resource-light approach suits similar-length tasks or when concurrent tasks don’t exceed available daemons.
Info now shows 6 connections:
info()
#> connections cumulative awaiting executing completed
#> 6 NA NA NA NA
everywhere() evaluates expressions on all daemons and persists state regardless of cleanup settings:
everywhere(library(DBI))
This keeps the DBI package loaded.
You can also set up common resources like database connections:
everywhere(con <<- dbConnect(RSQLite::SQLite(), file), file = tempfile())
Super-assignment makes ‘con’ available globally in all daemons:
mirai(exists("con"))[]
#> [1] TRUE
Disconnect everywhere:
everywhere(dbDisconnect(con))
To evaluate in the global environment of each daemon (since mirai evaluations occur in an environment inheriting from global), use evalq(envir = globalenv()). Example with box::use():
everywhere(
evalq(
box::use(dplyr[select], mirai[...]),
envir = globalenv()
)
)
daemons(0)
This section covers two complementary tools: queue backpressure at the dispatcher, and shared memory to avoid copying large objects to local daemons.
Queue backpressure applies only in dispatcher mode (dispatcher = TRUE, the default), where all tasks queue at a single host process.
Under dispatcher = FALSE, tasks distribute round-robin to daemons and each daemon holds its own backlog — memory pressure spreads across daemon processes rather than concentrating at the host.
Each mirai() call serialises its arguments and hands them to the dispatcher, which holds them until a daemon is free.
By default the queue is unbounded, so passing large objects or submitting faster than daemons can consume risks host out-of-memory.
The memory argument to daemons() caps the approximate total payload of queued tasks at dispatcher (in MB, metric — 1 MB = 1,000,000 bytes).
New mirai() calls block until queued bytes drop below this threshold, providing memory-based backpressure.
daemons(2, memory = 100)
Inspect current and peak usage via the memory field of status():
status()$memory
#> used peak capacity
#> 0 0 100
used is current and peak is the high-watermark queued payload, both in MB.
Profile a representative workload with memory = NULL first (where capacity reports NA) to capture organic demand, then set memory at or above the observed peak.
If profiling isn’t practical, treat memory as a fraction of host RAM rather than the whole of it.
With local daemons, the same machine runs the host R process, the dispatcher, and n daemon processes — and each daemon holds an in-flight payload copy while executing — so total memory pressure scales with n.
A reasonable starting point is ps::ps_system_memory()[["avail"]] / 2e6 (half of currently available RAM, in MB), revised down if n is large or payloads are big.
With remote daemons only the host and dispatcher consume local RAM, so the budget can be more generous.
Blocking the host R thread is acceptable in batch scripts, but not in event-loop contexts.
A Shiny session that calls mirai() from inside an ExtendedTask can’t afford to block while the queue drains — that same session is also driving the UI for all users.
try_mirai() returns immediately when the queue is full, instead of blocking:
It accepts the same arguments as mirai() and returns a mirai. When the queue is full, it instead returns NULL invisibly without blocking, leaving the caller to decide what to do next.
m <- try_mirai(rnorm(10)) %||% stop("queue full")
The three response strategies are: drop the task (best when the work is idempotent and frequent), retry later (queue behind a later::later() call), or propagate backpressure upstream by raising a condition (as in the example above).
Which is right is application-specific.
With memory unset or no dispatcher, try_mirai() always returns a mirai and behaves identically to mirai() — safe to use unconditionally; it only diverges in the bounded-queue case.
daemons(memory = ...) and try_mirai() together are the canonical event-loop combination: set the cap to what the session can afford to hold, submit through try_mirai(), and the application adapts to load instead of locking up.
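One retry-later sketch for an event-loop context, using the later package (the submit() helper and the retry interval are illustrative, not part of mirai):

```r
library(mirai)
library(later)

submit <- function(expr, delay = 0.1) {
  m <- try_mirai(.expr = expr)
  if (is.null(m)) {
    # queue full: yield to the event loop and retry shortly
    later::later(function() submit(expr, delay), delay)
    invisible(NULL)
  } else {
    m # handle via a promise, or poll later
  }
}

submit(quote(rnorm(10)))
```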
daemons(0)
Each mirai() call serialises its arguments to the daemon and the result back, even when the daemon is on the same machine.
For large objects, this copy can dominate evaluation time.
The mori package provides shared-memory R objects that local daemons read in place, without copying.
Wrap an atomic vector, list, or dataframe with mori::share():
library(lobstr)
daemons(4)
x <- mori::share(rnorm(1e6))
m <- mirai(list(size = lobstr::obj_size(y), sum = sum(y)), y = x)
m[]
#> $size
#> 840 B
#>
#> $sum
#> [1] -1208.687
daemons(0)
Only a reference to the shared memory is serialised across the wire; the daemon accesses the data directly via ALTREP (R’s alternative representation system).
lobstr::obj_size(y) on the daemon reflects just the wrapper, confirming the 8 MB vector was never copied across.
Shared objects are local-machine only and cannot be transferred to remote daemons — use regular argument-passing for distributed work.
share() and memory = compose: a share()-wrapped argument shrinks the queued payload to just a reference, sidestepping backpressure on the input side.
A daemon can also share() its return value for symmetric zero-copy on the result side.
mirai_map() performs asynchronous parallel mapping over lists or vectors.
Requires daemons() to be set (avoids launching too many ephemeral daemons).
Returns immediately. Collect results with x[]:
with(daemons(3, seed = 1234L), mirai_map(1:3, rnorm, .args = list(mean = 20, sd = 2))[])
#> [[1]]
#> [1] 19.86409
#>
#> [[2]]
#> [1] 19.55834 22.30159
#>
#> [[3]]
#> [1] 20.62193 23.06144 19.61896
Use .args for constant arguments to .f, and ... for objects referenced in .f:
daemons(4, seed = 2345L)
fn <- function(x, range) runif(x, x, x + range)
ml <- mirai_map(c(a = 1, b = 2, c = 3), \(x) fn(x, x * 2), fn = fn)
ml
#> < mirai map [0/3] >
ml[]
#> $a
#> [1] 2.637793
#>
#> $b
#> [1] 2.328183 5.649959
#>
#> $c
#> [1] 5.302906 3.531788 6.389231
x[.flat] flattens results (checks types to avoid coercion)
x[.progress] shows a progress bar (via cli) or text indicator
x[.stop] applies early stopping, cancelling remaining tasks on first failure
mirai_map(list(a = 1, b = "a", c = 3), function(x) exp(x))[.stop]
#> Error in `mirai_map()`:
#> ℹ In index: 2.
#> ℹ With name: b.
#> Caused by error in `exp()`:
#> ! non-numeric argument to mathematical function
mirai_map(c(0.1, 0.2, 0.3), Sys.sleep)[.progress, .flat]
#> NULL
Dataframes and matrices map over rows.
.f must accept as many arguments as there are columns:
fruit <- c("melon", "grapes", "coconut")
df <- data.frame(i = seq_along(fruit), fruit = fruit)
mirai_map(df, sprintf, .args = list(fmt = "%d. %s"))[.flat]
#> [1] "1. melon" "2. grapes" "3. coconut"
Matrices also map over rows:
mat <- matrix(1:4, nrow = 2L, dimnames = list(c("a", "b"), c("y", "z")))
mirai_map(mat, function(x = 10, y = 0, z = 0) x + y + z)[.flat]
#> a b
#> 14 16
daemons(0)
To map over columns instead, use as.list() for dataframes or t() for matrices.
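A brief sketch of column-wise mapping over a dataframe, following the pattern above:

```r
library(mirai)

df <- data.frame(x = 1:3, y = 4:6)

with(
  daemons(2),
  mirai_map(as.list(df), sum)[] # should return a named list: x = 6, y = 15
)
```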
For nested mapping, don’t launch local daemons from within mirai_map().
Instead:
daemons(url = local_url())
launch_local(n)
This section covers setting up remote daemons, launching them on remote machines, and securing connections with TLS.
Remote daemons run on network machines to process tasks remotely.
Call daemons() with a ‘url’ (e.g., ‘tcp://10.75.32.70:5555’) or use host_url() to construct one automatically.
The host listens on a single port for daemons to connect.
IPv6 addresses are also supported and must be enclosed in square brackets [] to avoid confusion with the final colon separating the port. For example, port 5555 on the IPv6 address ::ffff:a6f:50d would be specified as tcp://[::ffff:a6f:50d]:5555.
Calling host_url() without a port uses ‘0’, which automatically assigns a free ephemeral port:
daemons(url = host_url())
Query launch_remote() for the assigned port:
launch_remote()
#> [1]
#> Rscript -e 'mirai::daemon("tcp://192.168.7.113:53887")'
Dynamically scale the number of daemons up or down as needed.
Reset all connections:
daemons(0)
Closing connections exits all daemons.
Launchers deploy daemons on remote machines. Once deployed, daemons connect back to the host via TCP or TLS.
Local launchers run Rscript via a local shell.
Remote launchers run Rscript on remote machines.
Supply a remote launch configuration to the ‘remote’ argument of daemons() or launch_remote().
Four configuration options:
ssh_config() for SSH access
cluster_config() for high-performance computing (HPC) resource managers (Slurm, SGE, Torque/PBS, LSF)
http_config() for HTTP API launch (e.g., Posit Workbench)
remote_config() for generic/custom launchers
All return simple lists that can be pre-constructed, saved, and reused.
Use for internal networks where the host can accept incoming connections. Remote daemons connect back directly to the host port.
TLS is recommended for additional security.
Launch 4 daemons on 10.75.32.90 (SSH port 22 is default):
daemons(
n = 4,
url = host_url(tls = TRUE, port = 5555),
remote = ssh_config("ssh://10.75.32.90")
)
Launch one daemon on each machine using custom SSH port 222:
daemons(
n = 1,
url = host_url(tls = TRUE, port = 5555),
remote = ssh_config(c("ssh://10.75.32.90:222", "ssh://10.75.32.91:222"))
)
Use SSH tunnelling when firewall policies prevent direct connections. Requires SSH key-based authentication to be set up.
SSH tunnelling creates a tunnel after the initial SSH connection, using the same port on both host and daemon.
Supply a ‘127.0.0.1’ URL to daemons():
local_url(tcp = TRUE) constructs this automatically. With local_url(tcp = TRUE, port = 5555), the host listens at 127.0.0.1:5555 and daemons dial into 127.0.0.1:5555 on their own machines.
Launch 2 daemons on 10.75.32.90 with tunnelling:
daemons(
n = 2,
url = local_url(tcp = TRUE),
remote = ssh_config("ssh://10.75.32.90", tunnel = TRUE)
)
cluster_config() deploys daemons via cluster resource managers.
Specify command:
"sbatch" for Slurm
"qsub" for SGE/Torque/PBS
"bsub" for LSF
The options argument accepts scheduler options (lines typically preceded by #):
Slurm: "#SBATCH --job-name=mirai
#SBATCH --mem=10G
#SBATCH --output=job.out"
SGE: "#$ -N mirai
#$ -l mem_free=10G
#$ -o job.out"
Torque/PBS: "#PBS -N mirai
#PBS -l mem=10gb
#PBS -o job.out"
LSF: "#BSUB -J mirai
#BSUB -M 10000
#BSUB -o job.out"
Separate lines with \n. Shell commands may also be included, for example cd to set the working directory, or a shebang line (#!/bin/bash). Where R is provided as an environment module, load it with:
module load R
or for a specific R version:
module load R/4.5.0
The rscript argument defaults to "Rscript" (assumes R is on PATH).
Specify full path if needed: file.path(R.home("bin"), "Rscript").
For many daemons, use job arrays instead of individual jobs.
Instead of:
daemons(n = 100, url = host_url(), remote = cluster_config())
use:
daemons(
n = 1,
url = host_url(),
remote = cluster_config(options = "#SBATCH --array=1-100")
)
http_config() launches daemons via HTTP API.
It takes the following arguments:
url: API endpoint URLmethod: HTTP method (typically "POST")cookie: session cookie for authenticationtoken: bearer token for authenticationdata: request body containing a "%s" placeholder where the daemon launch command is insertedEach argument accepts either a character value or a function returning a value.
When a function is supplied, it is called at launch time (when launch_remote() runs), not when the configuration is created.
This lazy evaluation ensures that dynamic values such as session cookies, API tokens, or endpoint URLs are always fresh at the moment of use.
Requires Posit Workbench 2026.01 or later, which supports authenticating the launcher using the session cookie.
By default, http_config() auto-configures for Posit Workbench.
The defaults for url, cookie, and data are functions (not function calls) that read Workbench environment information:
http_config(
url = posit_workbench_url, # reads server address at launch time
method = "POST",
cookie = posit_workbench_cookie, # reads session cookie at launch time
token = NULL,
data = posit_workbench_data # queries the compute environment at launch time
)
Because these are stored as functions, calling http_config() does no work — it simply saves the functions into the configuration list.
Only when daemons are actually launched are the functions evaluated, at which point the environment variables are read and the API is queried.
This means the configuration can be created early (e.g., at session start) while credentials that may change or expire are always obtained fresh.
Launch daemons in Posit Workbench:
daemons(n = 2, url = host_url(), remote = http_config())
The default Workbench launch may be customised by supplying additional options to http_config(), which are forwarded to the data builder.
Select a named cluster and resource profile:
daemons(
n = 2,
url = host_url(),
remote = http_config(cluster = "Kubernetes", resource_profile = "rstudio")
)
Or specify custom resources in place of a named profile (4 CPUs, 8 GB memory):
daemons(
n = 2,
url = host_url(),
remote = http_config(cluster = "Kubernetes", cpus = 4, memory = 8192)
)
The full list of accepted options (rscript, job_name, cluster, resource_profile, cpus, memory) is documented at ?http_config.
For custom HTTP APIs, provide URL, authentication, and request body.
The data argument should include "%s" as a placeholder where the daemon launch command is inserted at launch time:
daemons(
n = 2,
url = host_url(),
remote = http_config(
url = "https://api.example.com/launch",
method = "POST",
token = function() Sys.getenv("MY_API_KEY"),
data = '{"command": "%s"}'
)
)
Here, token is a function so the API key environment variable is read each time daemons are launched.
The remaining arguments are plain character values used as-is.
launch_remote() with an http_config() configuration returns a list of server response data (invisibly).
Capture and inspect these to diagnose launch failures:
daemons(url = host_url())
res <- launch_remote(remote = http_config())
Each element of res is the response for a single daemon launch request.
remote_config() provides a generic framework for custom deployment commands.
The args argument must contain ".", which is replaced with the daemon launch command.
cluster_config() is easier for HPC, but remote_config() offers flexibility.
Slurm example:
daemons(
n = 2,
url = host_url(),
remote = remote_config(
command = "sbatch",
args = c("--mem 512", "-n 1", "--wrap", "."),
rscript = file.path(R.home("bin"), "Rscript"),
quote = TRUE
)
)
Call launch_remote() without ‘remote’ to get shell commands for manual deployment:
daemons(url = host_url())
launch_remote()
#> [1]
#> Rscript -e 'mirai::daemon("tcp://192.168.7.113:53888")'
daemons(0)
TLS secures communications between host and remote daemons.
Use tls+tcp:// scheme or host_url(tls = TRUE):
daemons(url = host_url(tls = TRUE))
Keys and certificates generate automatically. Private keys remain on the host.
Self-signed certificates are included in launch_remote() commands:
launch_remote(1)
#> [1]
#> Rscript -e 'mirai::daemon("tls+tcp://192.168.7.113:53889",tlscert=c("-----BEGIN CERTIFICATE-----
#> MIIFQTCCAymgAwIBAgIBATANBgkqhkiG9w0BAQsFADA4MRYwFAYDVQQDDA0xOTIu
#> MTY4LjcuMTEzMREwDwYDVQQKDAhOYW5vbmV4dDELMAkGA1UEBhMCSlAwHhcNMDEw
#> MTAxMDAwMDAwWhcNMzAxMjMxMjM1OTU5WjA4MRYwFAYDVQQDDA0xOTIuMTY4Ljcu
#> MTEzMREwDwYDVQQKDAhOYW5vbmV4dDELMAkGA1UEBhMCSlAwggIiMA0GCSqGSIb3
#> DQEBAQUAA4ICDwAwggIKAoICAQCGCUyn6GWWGAwNGQ5jhku6BWSKqRD/cgmEk7MA
#> Whsxb5nSPqsqh4ivTXJQZ1LzhcESX9eI9whwzknQ9MVCOA+Rij9EVzRu9Ypawvax
#> 1yDZmlwSG5WAUva9vGtMZirA09oMp8+8hKt9eY44DG3EikP5g7lxWwaCTsz0Q/br
#> 9YechhF458mGoDmM7yGp/L65mVsn52nMrK6QWJ7wrfqNyqzzz35IEI7PVxKk0KzD
#> bcUsFYsCg1+STNz1ADAlUtsJFD0RUg38oAUn0cBi77sLBX9TQihPWInUY51tzyvI
#> fukiiavDE5N56z3JKqTQZVk66H5kDEPz3/zZQB6YeNjV7UEua63a6FOLD/hVi2fa
#> C3S/ud2xEFudik3xVVoNf+oQbQ4aLH6YUlresFXhvlU/LKLmbyDwjzXq+QRdyhIt
#> mWJisrvK19/2cUSt1AflIt8nQYVgHhJsTxK40FLq7bGvwKPVibJYf0+8DW2VFqjj
#> Z2TMBjYom0Ynixs3quQ7j92+2o2WK6O6uL06pKiLNiJi8CD/7pqTXEHCkYGh03tL
#> 5w+00bIXWXH4OoYmtyuHjvDlTKYQVzMOTwxH9zotOvq88Ifzn5UBZH7RvDbqFyo8
#> KvsVXuVKcLL/F0tzraExI+Zv5/YjrSqK7Jxtrdd3QN1750rjnujiOjfU1iJwX20a
#> ZIc3JQIDAQABo1YwVDASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBRxU45I
#> ZGU7XK6DwHipxYuGgnsnCzAfBgNVHSMEGDAWgBRxU45IZGU7XK6DwHipxYuGgnsn
#> CzANBgkqhkiG9w0BAQsFAAOCAgEAIuzA09/e739fQ9/cwf7M6KXNHlv5Fd2aO5l/
#> g5B7h7o/bZgGghchYNAmC+e7o0QSUoS2giGx3EW1GsH587+5V6ZliBYnl5LeTlRW
#> 2OUq4odZYJvykpbyDatohnz+YzDQcdxYtOdJrHrqmjSuL2W28QnzKyYGTVbSi/4V
#> cN8QMw0i53iayWYDmHtshXOF5DmnNCF+8ffVuR+B4Tzf3XN98zZU7gwxEEIDzF5n
#> KfeVmAAUg69eQiALpJ6ZyWUFHQAd92H9Gezx8qMqCia1Iqym8Vg7aaWmXak0jYjm
#> 35Mrfn57Oej8b7epSzsWKvZu2ZuKyBJ0T2rjdpXJwqz3IrMFJDM6dV9NFDfnGiAq
#> uI64s80uRGvxZ+OtPvHD+1OMM9wcJGCfLx2INTII/AE6pWcCb2ddOfvv6JX1n54t
#> pu5woTWihH3BaI604fmt1g+CHHVzw0Rpi1uPzIumzD5B59a8BptbSPqkoy2YlOU4
#> 2c6VoY88pZvADYjzDMLVJHGax/FUDJeocN3TF+Ao2e9C0lPjrqb8i7tI49ZgNK6L
#> YeLYVjWIWm+BUGN5xef7IBS6x0Vwqgrsffwb/64g11MJx7Tk9N6oOXYEN4TbOEfn
#> evegzVN2xHKCml+R7W5/02yegUBEkzlo386v/5I4RfsqVb4yohp0HIZ6up6A+UPm
#> X1Lbtco=
#> -----END CERTIFICATE-----
#> ",""))'
daemons(0)
Alternatively, generate certificates via a Certificate Signing Request (CSR) to a Certificate Authority (public or internal).
The certificate should comprise a block of cipher text between the markers -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----. Make sure to request the certificate in the PEM format. If only available in other formats, the TLS library used should usually provide conversion utilities.
The private key should comprise a block of cipher text between the markers -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY-----. Keep it secure on the host machine; it is supplied only to daemons().
Assuming the certificate and key have been imported into R as character strings cert and key respectively, then the ‘tls’ argument may be specified as the character vector c(cert, key).
Certain authorities may issue the certificate as a chain, comprising multiple blocks between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- markers. The first one should be the newly-generated TLS certificate, the same supplied to daemons(), and the final one should be a CA root certificate.
Assuming the certificate chain has been imported as certchain, then the character vector comprising this and an empty character string c(certchain, "") may be supplied to ‘tlscert’.
The .compute argument to daemons() creates separate, independent daemon pools (compute profiles) for heterogeneous compute requirements:
Pass a character string to .compute as the profile name (NULL defaults to ‘default’).
Settings save under this name.
Specify .compute in mirai() to use a profile (NULL uses ‘default’).
Other functions (info(), status(), launch_local(), launch_remote()) also accept .compute.
with_daemons() and local_daemons()
Using with_daemons() or local_daemons() with a profile name sets the default for all functions within that scope:
daemons(1, .compute = "cpu")
daemons(1, .compute = "gpu")
with_daemons("cpu", {
m1 <- mirai(Sys.getpid())
})
with_daemons("gpu", {
m2 <- mirai(Sys.getpid())
m3 <- mirai(Sys.getpid(), .compute = "cpu")
local_daemons("cpu")
m4 <- mirai(Sys.getpid())
})
m1[]
#> [1] 42715
m2[] # different to m1
#> [1] 42730
m3[] # same as m1
#> [1] 42715
m4[] # same as m1
#> [1] 42715
with_daemons("cpu", daemons(0))
with_daemons("gpu", daemons(0))
The with() method creates daemons for an expression’s duration, then automatically resets them.
Functions within the scope use the daemons’ compute profile.
Designed for running Shiny apps with specific daemon counts:
with(daemons(4), shiny::runApp(app))
# Or:
with(daemons(4, .compute = "shiny"), shiny::runApp(app))
Note: the app must already be created. Don’t wrap shiny::shinyApp(), since runApp() is called when the app object is printed, after with() returns.
Shiny apps execute all mirai calls before returning (blocking). For other expressions, collect all mirai values to ensure completion before daemon reset.
mirai uses L’Ecuyer-CMRG streams (like base R’s parallel package) for statistically-sound parallel random number generation (RNG).
Streams divide the RNG sequence at far-apart intervals that don’t overlap, ensuring valid parallel results.
Default (seed = NULL): a new stream per daemon (like base R).
Reproducible (seed = integer): a new stream per mirai() call (not per daemon).
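A sketch of the reproducibility this provides: with a fixed integer seed, identical call sequences should yield identical draws across independent daemon setups.

```r
library(mirai)

daemons(2, seed = 1234L)
r1 <- mirai(rnorm(3))[]
daemons(0)

daemons(2, seed = 1234L)
r2 <- mirai(rnorm(3))[]
daemons(0)

identical(r1, r2) # expected TRUE: each mirai() call gets its own stream
```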
daemons(sync = TRUE) enables synchronous mode.
Mirai evaluate immediately without async operation, useful for testing and debugging with browser().
Restrict to a specific profile by specifying .compute.
Only seed affects behavior with sync = TRUE.
Example usage:
# run everything in sync:
daemons(sync = TRUE)
mp <- mirai_map(1:2, \(x) Sys.getpid())
daemons(0)
mp[]
#> [[1]]
#> [1] 5757
#>
#> [[2]]
#> [1] 5757
# Use sync with the 'sync' compute profile:
daemons(sync = TRUE, .compute = "sync")
with_daemons("sync", {
mp <- mirai_map(1:2, \(x) Sys.getpid())
})
daemons(0, .compute = "sync")
mp[]
#> [[1]]
#> [1] 5757
#>
#> [[2]]
#> [1] 5757