Classic
- rpyc.utils.classic.connect_channel(channel)[source]
Creates an RPyC connection over the given channel.
- Parameters:
channel – the rpyc.core.channel.Channel instance
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.connect_stream(stream)[source]
Creates an RPyC connection over the given stream.
- Parameters:
stream – the rpyc.core.stream.Stream instance
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.connect_stdpipes()[source]
Creates an RPyC connection over the standard pipes (stdin and stdout).
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.connect_pipes(input, output)[source]
Creates an RPyC connection over two pipes.
- Parameters:
input – the input pipe
output – the output pipe
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.connect(host, port=18812, ipv6=False, keepalive=False)[source]
Creates a socket connection to the given host and port.
- Parameters:
host – the host to connect to
port – the TCP port
ipv6 – whether to create an IPv6 socket or an IPv4 one
keepalive – whether to set the TCP keepalive option on the client socket
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.unix_connect(path)[source]
Creates a unix domain socket connection to the given path.
- Parameters:
path – the path to the unix domain socket
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.ssl_connect(host, port=18821, keyfile=None, certfile=None, ca_certs=None, cert_reqs=None, ssl_version=None, ciphers=None, ipv6=False)[source]
Creates a secure (SSL) socket connection to the given host and port, authenticating with the given certfile and CA file.
- Parameters:
host – the host to connect to
port – the TCP port to use
ipv6 – whether to create an IPv6 socket or an IPv4 one
The following arguments are passed to ssl.SSLContext and its corresponding methods:
- Parameters:
keyfile – see ssl.SSLContext.load_cert_chain. May be None
certfile – see ssl.SSLContext.load_cert_chain. May be None
ca_certs – see ssl.SSLContext.load_verify_locations. May be None
cert_reqs – see ssl.SSLContext.verify_mode. By default, if ca_certs is specified, the requirement is set to CERT_REQUIRED; otherwise it is set to CERT_NONE
ssl_version – see ssl.SSLContext. The default is defined by ssl.create_default_context
ciphers – see ssl.SSLContext.set_ciphers. May be None. New in Python 2.7/3.2
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.ssh_connect(remote_machine, remote_port)[source]
Connects to the remote server over an SSH tunnel. See rpyc.utils.factory.ssh_connect() for more info.
- Parameters:
remote_machine – the plumbum.remote.RemoteMachine instance
remote_port – the remote TCP port
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.connect_subproc(server_file=None)[source]
Runs an RPyC classic server as a subprocess and returns an RPyC connection to it over stdio.
- Parameters:
server_file – the full path to the server script (rpyc_classic.py). If not given, which rpyc_classic.py will be attempted.
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.connect_thread()[source]
Starts a SlaveService on a thread and connects to it. Useful for testing purposes. See rpyc.utils.factory.connect_thread()
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.connect_multiprocess(args={})[source]
Starts a SlaveService in a multiprocessing process and connects to it. Useful for testing purposes and for running multicore code that uses shared memory. See rpyc.utils.factory.connect_multiprocess()
- Returns:
an RPyC connection exposing SlaveService
- rpyc.utils.classic.upload(conn, localpath, remotepath, filter=None, ignore_invalid=False, chunk_size=64000)[source]
Uploads a file or a directory to the given remote path.
- Parameters:
localpath – the local file or directory
remotepath – the remote path
filter – a predicate that accepts the filename and determines whether it should be uploaded; None means any file
chunk_size – the IO chunk size
- rpyc.utils.classic.download(conn, remotepath, localpath, filter=None, ignore_invalid=False, chunk_size=64000)[source]
Downloads a file or a directory from the given remote path to the given local path.
- Parameters:
localpath – the local file or directory
remotepath – the remote path
filter – a predicate that accepts the filename and determines whether it should be downloaded; None means any file
chunk_size – the IO chunk size
- rpyc.utils.classic.upload_package(conn, module, remotepath=None, chunk_size=64000)[source]
Uploads a module or a package to the remote party.
- Parameters:
conn – the RPyC connection to use
module – the local module/package object to upload
remotepath – the remote path (if None, will default to the remote system’s python library, as reported by distutils)
chunk_size – the IO chunk size
Note
upload_module is just an alias to upload_package
example:
import foo.bar
...
rpyc.classic.upload_package(conn, foo.bar)
- rpyc.utils.classic.upload_module(conn, module, remotepath=None, chunk_size=64000)
Uploads a module or a package to the remote party.
- Parameters:
conn – the RPyC connection to use
module – the local module/package object to upload
remotepath – the remote path (if None, will default to the remote system’s python library, as reported by distutils)
chunk_size – the IO chunk size
Note
upload_module is just an alias to upload_package
example:
import foo.bar
...
rpyc.classic.upload_package(conn, foo.bar)
- rpyc.utils.classic.obtain(proxy)[source]
Obtains (copies) a remote object from a proxy object. The object is pickled on the remote side and unpickled locally, thus moved by value; changes made to the local object will not be reflected remotely.
- Parameters:
proxy – an RPyC proxy object
Note
the remote object must be pickle-able
- Returns:
a copy of the remote object
- rpyc.utils.classic.deliver(conn, localobj)[source]
Delivers (recreates) a local object on the other party. The object is pickled locally and unpickled on the remote side, thus moved by value; changes made to the remote object will not be reflected locally.
- Parameters:
conn – the RPyC connection
localobj – the local object to deliver
Note
the object must be pickle-able
- Returns:
a proxy to the remote object
- rpyc.utils.classic.redirected_stdio(conn)[source]
Redirects the other party’s stdin, stdout and stderr to those of the local party, so remote IO will occur locally.
Example usage:
with redirected_stdio(conn):
    conn.modules.sys.stdout.write("hello\n")   # will be printed locally
- rpyc.utils.classic.pm(conn)[source]
Same as pdb.pm(), but operating on a remote exception.
- Parameters:
conn – the RPyC connection
- rpyc.utils.classic.interact(conn, namespace=None)[source]
Starts a remote interactive interpreter.
- Parameters:
conn – the RPyC connection
namespace – the namespace to use (a dict)
- class rpyc.utils.classic.MockClassicConnection[source]
Mock classic RPyC connection object. Useful when you want the same code to run remotely or locally.
- rpyc.utils.classic.teleport_function(conn, func, globals=None, def_=True)[source]
“Teleports” a function (including nested functions/closures) over the RPyC connection. The function is passed in bytecode form and reconstructed on the other side.
The function cannot have non-brinable defaults (e.g., def f(x, y=[8]):, since a list isn’t brinable), nor can it make use of non-builtin globals (such as modules). You can overcome the second restriction by moving the necessary imports into the function body, e.g.:
def f(x, y):
    import os
    return (os.getpid() + y) * x
Note
While it is not forbidden to “teleport” functions across different Python versions, it may result in errors due to Python bytecode differences. It is recommended to ensure both the client and the server are of the same Python version when using this function.
- Parameters:
conn – the RPyC connection
func – the function object to be delivered to the other party
Helpers
Helpers and wrappers for common RPyC tasks
- rpyc.utils.helpers.buffiter(obj, chunk=10, max_chunk=1000, factor=2)[source]
Buffered iterator – reads the remote iterator in chunks, starting with chunk and multiplying the chunk size by factor every time, as an exponential backoff, up to a chunk of max_chunk size.
buffiter is very useful for tight loops, where you fetch an element from the other side on every iteration. Instead of being limited by the network’s latency after every iteration, buffiter fetches a “chunk” of elements every time, reducing the number of network I/Os.
- Parameters:
obj – an iterable object (supports iter())
chunk – the initial chunk size
max_chunk – the maximal chunk size
factor – the factor by which to multiply the chunk size after every iteration (up to max_chunk). Must be >= 1.
- Returns:
an iterator
Example:
cursor = db.get_cursor()
for id, name, dob in buffiter(cursor.select("Id", "Name", "DoB")):
    print(id, name, dob)
- rpyc.utils.helpers.restricted(obj, attrs, wattrs=None)[source]
Returns a ‘restricted’ version of an object, i.e., allowing access only to a subset of its attributes. This is useful when returning a “broad” or “dangerous” object, where you don’t want the other party to have access to all of its attributes.
New in version 3.2.
- Parameters:
obj – any object
attrs – the set of attributes exposed for reading (getattr) or writing (setattr). The same set will serve both for reading and writing, unless wattrs is explicitly given.
wattrs – the set of attributes exposed for writing (setattr). If None, wattrs will default to attrs. To disable setting attributes completely, set it to an empty tuple ().
- Returns:
a restricted view of the object
Example:
class MyService(rpyc.Service):
    def exposed_open(self, filename):
        f = open(filename, "r")
        # disallow access to `seek` or `write`
        return rpyc.restricted(f, {"read", "close"})
- rpyc.utils.helpers.async_(proxy)[source]
Creates an async proxy wrapper over an existing proxy. Async proxies are cached. Invoking an async proxy will return an AsyncResult instead of blocking.
- class rpyc.utils.helpers.timed(proxy, timeout)[source]
Creates a timed asynchronous proxy. Invoking the timed proxy will run in the background, and will raise an rpyc.core.async_.AsyncResultTimeout exception if the computation does not terminate within the given time frame.
- Parameters:
proxy – any callable RPyC proxy
timeout – the maximal number of seconds to allow the operation to run
- Returns:
a timed wrapped proxy
Example:
t_sleep = rpyc.timed(conn.modules.time.sleep, 6)   # allow up to 6 seconds
t_sleep(4)   # okay
t_sleep(8)   # will time out and raise AsyncResultTimeout
- class rpyc.utils.helpers.BgServingThread(conn, callback=None, serve_interval=0.0, sleep_interval=0.1)[source]
Runs an RPyC server in the background to serve all requests and replies that arrive on the given RPyC connection. The thread is started upon the instantiation of the BgServingThread object; you can use the stop() method to stop the server thread.
CAVEAT: RPyC defaults to bind_threads as False, so there is no guarantee that the background thread will serve the request. See issue #522 for an example of this behavior. As the bind_threads feature matures, we may change the default to True in the future.
Example:
conn = rpyc.connect(...)
bg_server = BgServingThread(conn)
...
bg_server.stop()
Note
For a more detailed explanation of asynchronous operation and the role of the BgServingThread, see Part 5: Asynchronous Operation and Events
- rpyc.utils.helpers.async(proxy)
An alias of async_, kept for backward compatibility (async became a reserved word as of Python 3.7; use async_ instead). Creates an async proxy wrapper over an existing proxy. Async proxies are cached. Invoking an async proxy will return an AsyncResult instead of blocking.