 
Developing distributed applications using ONC RPC and XDR

Using RPC/XDR for other tasks

This section discusses some other aspects of RPC.

Using select on the server

Suppose a process is handling RPC requests while performing some other activity. If the other activity involves periodically updating a data structure, the process can set an alarm signal before calling svc_run. If, however, the other activity involves waiting on a file descriptor, the svc_run call will not work. The code for svc_run is:

   void
   svc_run()
   {
           int readfds;

           for (;;) {
                   readfds = svc_fds;
                   switch (select(32, &readfds, NULL, NULL, NULL)) {

                   case -1:
                           if (errno == EINTR)
                                   continue;
                           perror("rstat: select");
                           return;
                   case 0:
                           break;
                   default:
                           svc_getreq(readfds);
                   }
           }
   }

You can bypass svc_run and call svc_getreq directly. To do this, you need to know the file descriptors of the socket(s) associated with the programs for which you are waiting. Thus, you can write your own ``selects'' that wait on both the RPC socket and your own descriptors.

Using broadcast RPC calls

The pmap and RPC protocols implement broadcast RPC. A broadcast RPC call differs from a normal RPC call in several ways: it can collect many answers (one from each responding server), it reaches only services registered with the portmapper, and it is supported only over connectionless (UDP) transports. The interface is:

   #include <rpc/pmap_clnt.h>

   enum clnt_stat clnt_stat;

   clnt_stat = clnt_broadcast(prog, vers, proc, xargs, argsp,
       xresults, resultsp, eachresult);
       ulong      prog;         /* program number */
       ulong      vers;         /* version number */
       ulong      proc;         /* procedure number */
       xdrproc_t  xargs;        /* xdr routine for args */
       caddr_t    argsp;        /* pointer to args */
       xdrproc_t  xresults;     /* xdr routine for results */
       caddr_t    resultsp;     /* pointer to results */
       bool_t     (*eachresult)(); /* call with each result obtained */

The procedure eachresult is called each time a valid result is obtained. It returns a boolean that indicates whether or not the client wants more responses.
   bool_t done;

   done = eachresult(resultsp, raddr);
       caddr_t resultsp;
       struct sockaddr_in *raddr; /* address of machine that sent response */

If done is TRUE, then broadcasting stops and clnt_broadcast returns successfully. Otherwise, the routine waits for another response. The request is rebroadcast after a few seconds of waiting. If no responses come back, the routine returns with RPC_TIMEDOUT. To interpret clnt_stat errors, feed the error code to clnt_perrno.

Batching

The RPC architecture is designed so that clients send a call message and wait for servers to reply that the call succeeded. This implies that clients do not compute while servers are processing a call. This is inefficient if the client does not want or need an acknowledgement for every message sent. It is possible for clients to continue computing while waiting for a response, using RPC batch facilities.

RPC messages can be placed in a pipeline of calls to a desired server; this is called batching. Batching assumes the following:

   Each RPC call in the pipeline requires no response from the server, and the server does not send a response message.

   The pipeline of calls is transported on a reliable byte stream transport such as TCP/IP.

Since the server does not respond to every call, the client can generate new calls in parallel with the server executing previous calls. Furthermore, the TCP/IP implementation can buffer up many call messages and send them to the server in one write system call. This overlapped execution greatly decreases the interprocess communication overhead of the client and server processes and the total elapsed time of a series of calls.

Since the batched calls are buffered, the client should eventually do a legitimate call to flush the pipeline.

A contrived example of batching follows. Assume a string-rendering service (like a window system) has two similar calls: one renders a string and returns void results, while the other renders a string and remains silent. The service (using the TCP/IP transport) may look like the following:

   #include <stdio.h>
   #include <rpc/rpc.h>
   #include <rpcsvc/windows.h>
   

   void windowdispatch();

   main()
   {
       SVCXPRT *transp;

       transp = svctcp_create(RPC_ANYSOCK, 0, 0);
       if (transp == NULL) {
           fprintf(stderr, "could not create an RPC server\n");
           exit(1);
       }
       pmap_unset(WINDOWPROG, WINDOWVERS);
       if (!svc_register(transp, WINDOWPROG, WINDOWVERS,
           windowdispatch, IPPROTO_TCP)) {
           fprintf(stderr, "could not register WINDOW service\n");
           exit(1);
       }
       svc_run();  /* never returns */
       fprintf(stderr, "should never reach this point\n");
   }

   void
   windowdispatch(rqstp, transp)
       struct svc_req *rqstp;
       SVCXPRT *transp;
   {
       char *s = NULL;
   

    switch (rqstp->rq_proc) {
    case NULLPROC:
        if (!svc_sendreply(transp, xdr_void, 0)) {
            fprintf(stderr, "could not reply to RPC call\n");
            exit(1);
        }
        return;
    case RENDERSTRING:
        if (!svc_getargs(transp, xdr_wrapstring, &s)) {
            fprintf(stderr, "could not decode arguments\n");
            svcerr_decode(transp);  /* tell caller of mistake */
            break;
        }
        /*
         * call here to render the string s
         */
        if (!svc_sendreply(transp, xdr_void, NULL)) {
            fprintf(stderr, "could not reply to RPC call\n");
            exit(1);
        }
        break;

       case RENDERSTRING_BATCHED:
           if (!svc_getargs(transp, xdr_wrapstring, &s)) {
               fprintf(stderr, "could not decode arguments\n");
               /*
                * we are silent in the face of protocol errors
                */
               break;
           }
           /*
         * call here to render the string s,
         * but send no reply!
            */
           break;
       default:
           svcerr_noproc(transp);
           return;
       }
       /*
        * now free string allocated while decoding arguments
        */
       svc_freeargs(transp, xdr_wrapstring, &s);
   }
Of course, the service could instead have a single procedure that takes the string plus a boolean indicating whether or not the procedure should respond.

To take advantage of batching, the client must perform RPC calls on a TCP-based transport. The actual calls must have the following attributes: the XDR routine for the result must be zero (NULL), and the call's timeout must be zero. These two attributes are what tell the library not to wait for a reply.

Here is an example of a client that uses batching to render a bunch of strings; the batching is flushed when the client gets a null string:
   #include <stdio.h>
   #include <rpc/rpc.h>
   #include <rpcsvc/windows.h>
   #include <sys/socket.h>
   #include <sys/fs/nfs/time.h>
   #include <netdb.h>
   

   main(argc, argv)
       int argc;
       char **argv;
   {
       struct hostent *hp;
       struct timeval pertry_timeout, total_timeout;
       struct sockaddr_in server_addr;
       int addrlen, sock = RPC_ANYSOCK;
       register CLIENT *client;
       enum clnt_stat clnt_stat;
       char buf[1000];
       char *s = buf;

    /*
     * fill in server_addr here (for example, via gethostbyname),
     * as in the earlier examples
     */
       if ((client = clnttcp_create(&server_addr, WINDOWPROG,
           WINDOWVERS, &sock, 0, 0)) == NULL) {
           perror("clnttcp_create");
           exit(-1);
       }
       total_timeout.tv_sec = 0;
       total_timeout.tv_usec = 0;
       while (scanf("%s", s) != EOF) {
           clnt_stat = clnt_call(client, RENDERSTRING_BATCHED,
               xdr_wrapstring, &s, NULL, NULL, total_timeout);
           if (clnt_stat != RPC_SUCCESS) {
               clnt_perror(client, "batched rpc");
               exit(-1);
           }
       }
       /*
        * now flush the pipeline
        */
       total_timeout.tv_sec = 20;
       clnt_stat = clnt_call(client, NULLPROC,
           xdr_void, NULL, xdr_void, NULL, total_timeout);
       if (clnt_stat != RPC_SUCCESS) {
           clnt_perror(client, "rpc");
           exit(-1);
       }
   

clnt_destroy(client); }

Because the server sends no message, the clients cannot be notified of any failures that may occur. Therefore, clients are on their own when it comes to handling errors.

The example above was used to render all 2000 lines of the file /etc/termcap; the rendering service did nothing but throw the lines away. It was run in the following four configurations, with the results shown:

   Configuration                        Timing (in seconds)
   machine to itself, regular RPC       50
   machine to itself, batched RPC       16
   machine to another, regular RPC      52
   machine to another, batched RPC      10
Running fscanf on /etc/termcap requires only six seconds. These timings show the advantage of protocols that allow for overlapped execution, although these protocols are often hard to design.

Using authentication

In the examples presented so far, the caller never identified itself to the server, and the server never required an ID from the caller. Clearly, some network services, such as a network filesystem, require stronger security measures than those presented so far. In reality, every RPC call is authenticated by the RPC package on the server and, similarly, the RPC client package generates and sends authentication parameters. Just as different transports (TCP/IP or UDP/IP) can be used when creating RPC clients and servers, different forms of authentication can be associated with RPC clients; the authentication type used as a default is type none.

The authentication subsystem of the RPC package is open-ended, that is, numerous types of authentication are easy to support. However, this section describes the only type of authentication (other than none) supported in SCO NFS.

The client side

When a caller creates a new RPC client handle as in:

   clnt = clntudp_create(address, prognum, versnum, wait, sockp)
the appropriate transport instance defaults the associated authentication handle to be:
   clnt->cl_auth = authnone_create();
The RPC client can choose to use authentication found in UNIX systems by setting clnt->cl_auth after creating the RPC client handle:
   clnt->cl_auth = authunix_create_default();
This causes each RPC call associated with clnt to carry with it the following authentication credentials structure:
   /*
    * UNIX type credentials.
    */
   struct authunix_parms {
   	ulong	aup_time;	/* credentials creation time */
   	char	*aup_machname;	/* host name of client machine */
   	int	aup_uid;	/* client's UNIX effective uid */
   	int	aup_gid;	/* client's current UNIX group id */
   	uint	aup_len;	/* the element length of aup_gids array */
   	int	*aup_gids;	/* array of groups to which user belongs */
   };
These fields are set by authunix_create_default by invoking the appropriate system calls.

Since the RPC user created this new style of authentication, the user is responsible for destroying it with:

   auth_destroy(clnt->cl_auth);
This should be done in all cases to conserve memory.

The server side

The RPC package passes the service dispatch routine a request that has an arbitrary authentication style associated with it. This creates difficulty for the service implementors dealing with authentication issues. For example, consider the fields of a request handle passed to a service dispatch routine:

   /*
    * An RPC service request
    */
   struct svc_req {
           ulong        rq_prog;      /* service program number */
           ulong        rq_vers;      /* service protocol version number*/
           ulong        rq_proc;      /* the desired procedure number*/
           struct opaque_auth rq_cred; /* raw credentials from the "wire" */
           caddr_t       rq_clntcred;  /* read only, cooked credentials */
   };
The rq_cred is mostly opaque, except for one field of interest: the style of authentication credentials:
   /*
    * Authentication info.  Mostly opaque to the programmer.
    */
   struct opaque_auth {
       enum_t    oa_flavor;     /* style of credentials */
       caddr_t   oa_base;       /* address of more auth stuff */
       uint     oa_length;     /* not to exceed MAX_AUTH_BYTES */
   };
The RPC package guarantees the following to the service dispatch routine: first, that the request's rq_cred field is well formed, so the service implementor can safely inspect its oa_flavor and, if necessary, the opaque data in oa_base; and second, that the rq_clntcred field is either NULL or points to a well-formed structure corresponding to a supported type of authentication credentials (currently only struct authunix_parms).

The remote users service example can be extended so that it computes results for all users except UID 16:
   nuser(rqstp, transp)
       struct svc_req *rqstp;
       SVCXPRT *transp;
   {
       struct authunix_parms *unix_cred;
       int uid;
       unsigned long nusers;
   

        /*
         * we do not care about authentication
         * for the null procedure
         */
        if (rqstp->rq_proc == NULLPROC) {
            if (!svc_sendreply(transp, xdr_void, 0)) {
                fprintf(stderr, "could not reply to RPC call\n");
                exit(1);
            }
            return;
        }

       /*
        * now get the uid
        */
       switch (rqstp->rq_cred.oa_flavor) {
       case AUTH_UNIX:
           unix_cred = (struct authunix_parms *) rqstp->rq_clntcred;
           uid = unix_cred->aup_uid;
           break;
       case AUTH_NULL:
       default:
           svcerr_weakauth(transp);
           return;
       }
       switch (rqstp->rq_proc) {
       case RUSERSPROC_NUM:
           /*
            * make sure the caller is allowed to call this procedure.
            */
           if (uid == 16) {
               svcerr_systemerr(transp);
               return;
           }
           /*
            * code here to compute the number of users
            * and put in variable nusers
            */
            if (!svc_sendreply(transp, xdr_u_long, &nusers)) {
               fprintf(stderr, "could not reply to RPC call\n");
               exit(1);
           }
           return;
       default:
           svcerr_noproc(transp);
           return;
       }
   }
Note the following: it is customary not to check the authentication parameters for the NULLPROC procedure; if the flavor of authentication is not suitable for your service, call svcerr_weakauth; and the service protocol itself should define how to report access denied (this example instead reports a system error with svcerr_systemerr). The last point underscores the relation between the RPC authentication package and the services: RPC deals only with authentication, not with individual services' access control. The services themselves must implement their own access-control policies and reflect these policies as return status in their protocols.

Supporting multiple program versions

By convention, the first version number of program FOO is FOOVERS_ORIG, and the most recent version is FOOVERS. Suppose there is a new version of the rusers program that returns an unsigned short rather than an unsigned long. If we name this version RUSERSVERS_SHORT, then a server that wants to support both versions would use a double register:

   if (!svc_register(transp, RUSERSPROG, RUSERSVERS_ORIG, nuser,
       IPPROTO_TCP)) {
           fprintf(stderr, "could not register RUSER service\n");
           exit(1);
   }
   if (!svc_register(transp, RUSERSPROG, RUSERSVERS_SHORT, nuser,
       IPPROTO_TCP)) {
           fprintf(stderr, "could not register RUSER service\n");
           exit(1);
   }
Both versions can be handled by the same C procedure:
   nuser(rqstp, transp)
       struct svc_req *rqstp;
       SVCXPRT *transp;
   {
       unsigned long nusers;
       unsigned short nusers2;
   

        switch (rqstp->rq_proc) {
        case NULLPROC:
            if (!svc_sendreply(transp, xdr_void, 0)) {
                fprintf(stderr, "could not reply to RPC call\n");
                exit(1);
            }
            return;
        case RUSERSPROC_NUM:
            /*
             * code here to compute the number of users
             * and put in variable nusers
             */
            nusers2 = nusers;
            if (rqstp->rq_vers == RUSERSVERS_ORIG) {
                if (!svc_sendreply(transp, xdr_u_long, &nusers)) {
                    fprintf(stderr, "could not reply to RPC call\n");
                    exit(1);
                }
            } else {
                if (!svc_sendreply(transp, xdr_u_short, &nusers2)) {
                    fprintf(stderr, "could not reply to RPC call\n");
                    exit(1);
                }
            }
            return;
        default:
            svcerr_noproc(transp);
            return;
        }
    }

Using different serialization and deserialization

Here is an example that is essentially equivalent to the rcp(TC) command. The initiator of the RPC snd call takes its standard input and sends it to the server rcv, which prints it on standard output. The RPC call uses TCP. This also illustrates an XDR procedure that behaves differently on serialization from the way it does on deserialization.

The XDR routine

   /*
    * The xdr routine:
    *
    * on decode, read from wire, write onto fp
    * on encode, read from fp, write onto wire
    */
   #include <stdio.h>
   #include <rpc/rpc.h>
   

   xdr_rcp(xdrs, fp)
       XDR *xdrs;
       FILE *fp;
   {
       unsigned long size;
       char buf[MAXCHUNK], *p;

       if (xdrs->x_op == XDR_FREE)  /* nothing to free */
           return (1);
       while (1) {
           if (xdrs->x_op == XDR_ENCODE) {
               if ((size = fread(buf, sizeof(char), MAXCHUNK,
                   fp)) == 0 && ferror(fp)) {
                   fprintf(stderr, "could not fread\n");
                   exit(1);
               }
           }
           p = buf;
           if (!xdr_bytes(xdrs, &p, &size, MAXCHUNK))
               return (0);
           if (size == 0)
               return (1);
           if (xdrs->x_op == XDR_DECODE) {
               if (fwrite(buf, sizeof(char), size, fp) != size) {
                   fprintf(stderr, "could not fwrite\n");
                   exit(1);
               }
           }
       }
   }

The sender routines

   /*
    * The sender routines
    */
   #include <stdio.h>
   #include <netdb.h>
   #include <rpc/rpc.h>
   #include <sys/socket.h>
   #include <sys/fs/nfs/time.h>
   

   main(argc, argv)
       int argc;
       char **argv;
   {
       int err;

       if (argc < 2) {
           fprintf(stderr, "usage: %s server-name\n", argv[0]);
           exit(-1);
       }
       if ((err = callrpctcp(argv[1], RCPPROG, RCPPROC_FP,
           RCPVERS, xdr_rcp, stdin, xdr_void, 0)) != 0) {
           clnt_perrno(err);
           fprintf(stderr, " could not make RPC call\n");
           exit(1);
       }
   }

   callrpctcp(host, prognum, procnum, versnum, inproc, in, outproc, out)
       char *host, *in, *out;
       xdrproc_t inproc, outproc;
   {
       struct sockaddr_in server_addr;
       int socket = RPC_ANYSOCK;
       enum clnt_stat clnt_stat;
       struct hostent *hp;
       register CLIENT *client;
       struct timeval total_timeout;
   

       if ((hp = gethostbyname(host)) == NULL) {
           fprintf(stderr, "cannot get addr for '%s'\n", host);
           exit(-1);
       }
       bcopy(hp->h_addr, (caddr_t)&server_addr.sin_addr, hp->h_length);
       server_addr.sin_family = AF_INET;
       server_addr.sin_port = 0;
       if ((client = clnttcp_create(&server_addr, prognum, versnum,
           &socket, BUFSIZ, BUFSIZ)) == NULL) {
           perror("rpctcp_create");
           exit(-1);
       }
       total_timeout.tv_sec = 20;
       total_timeout.tv_usec = 0;
       clnt_stat = clnt_call(client, procnum, inproc, in,
           outproc, out, total_timeout);
       clnt_destroy(client);
       return ((int)clnt_stat);
   }

The receiving routines

   #include <stdio.h>
   #include <rpc/rpc.h>
   

   main()
   {
       register SVCXPRT *transp;

       if ((transp = svctcp_create(RPC_ANYSOCK, 1024, 1024)) == NULL) {
           fprintf(stderr, "svctcp_create: error\n");
           exit(1);
       }
       pmap_unset(RCPPROG, RCPVERS);
       if (!svc_register(transp, RCPPROG, RCPVERS, rcp_service,
           IPPROTO_TCP)) {
           fprintf(stderr, "svc_register: error\n");
           exit(1);
       }
       svc_run();  /* never returns */
       fprintf(stderr, "svc_run should never return\n");
   }

   rcp_service(rqstp, transp)
       register struct svc_req *rqstp;
       register SVCXPRT *transp;
   {
       switch (rqstp->rq_proc) {
       case NULLPROC:
           if (svc_sendreply(transp, xdr_void, 0) == 0) {
               fprintf(stderr, "err: rcp_service\n");
               exit(1);
           }
           return;
       case RCPPROC_FP:
           if (!svc_getargs(transp, xdr_rcp, stdout)) {
               svcerr_decode(transp);
               return;
           }
           if (!svc_sendreply(transp, xdr_void, 0)) {
               fprintf(stderr, "cannot reply\n");
               return;
           }
           exit(0);
       default:
           svcerr_noproc(transp);
           return;
       }
   }

Using callback procedures

Occasionally, it is useful to have a server become a client and make an RPC call back to the process that is its client. An example is remote debugging, where the client is a window system program and the server is a debugger running on the remote machine. Most of the time, the user clicks a mouse button at the debugging window, which converts this to a debugger command and then makes an RPC call to the server (where the debugger is actually running), telling it to execute that command. However, when the debugger hits a breakpoint, the roles are reversed, and the debugger wants to make an RPC call to the window program, so that it can inform the user that a breakpoint has been reached.

In order to do an RPC callback, you need a program number to make the RPC call. Since this program number is generated dynamically, it should be in the transient range, 0x40000000 - 0x5fffffff. The routine gettransient returns a valid program number in the transient range and registers it with the portmapper. It talks only to the portmapper running on the same machine as the gettransient routine itself. The call to pmap_set is a test-and-set operation: it atomically tests whether a program number has already been registered and, if it has not, reserves it. On return, the sockp argument contains a socket that can be used as the argument to an svcudp_create or svctcp_create call.

   #include <stdio.h>
   #include <rpc/rpc.h>
   #include <sys/socket.h>
   

   gettransient(proto, vers, sockp)
       int proto, vers;
       int *sockp;
   {
       static int prognum = 0x40000000;
       int s, len, socktype;
       struct sockaddr_in addr;

       switch (proto) {
       case IPPROTO_UDP:
           socktype = SOCK_DGRAM;
           break;
       case IPPROTO_TCP:
           socktype = SOCK_STREAM;
           break;
       default:
           fprintf(stderr, "unknown protocol type\n");
           return (0);
       }
       if (*sockp == RPC_ANYSOCK) {
           if ((s = socket(AF_INET, socktype, 0)) < 0) {
               perror("socket");
               return (0);
           }
           *sockp = s;
       } else
           s = *sockp;
       addr.sin_addr.s_addr = 0;
       addr.sin_family = AF_INET;
       addr.sin_port = 0;
       len = sizeof(addr);
       /*
        * may be already bound, so do not check for error
        */
       (void) bind(s, &addr, len);
       if (getsockname(s, &addr, &len) < 0) {
           perror("getsockname");
           return (0);
       }
       while (pmap_set(prognum++, vers, proto, ntohs(addr.sin_port)) == 0)
           continue;
       return (prognum - 1);
   }

The following pair of programs illustrates how to use the gettransient routine. The client makes an RPC call to the server, passing it a transient program number, then waits to receive a callback from the server at that program number. The server registers the program EXAMPLEPROG, so that it can receive the RPC call informing it of the callback program number. Then, at some random time (on receiving a SIGALRM signal in this example), it sends a callback RPC call, using the program number it received earlier.

Client program

   /*
    * client
    */
   #include <stdio.h>
   #include <rpc/rpc.h>
   

   int callback();
   char hostname[256];

   main(argc, argv)
       int argc;
       char **argv;
   {
       int x, ans, s;
       SVCXPRT *xprt;

       gethostname(hostname, sizeof(hostname));
       s = RPC_ANYSOCK;
       x = gettransient(IPPROTO_UDP, 1, &s);
       fprintf(stderr, "client gets prognum %d\n", x);

       if ((xprt = svcudp_create(s)) == NULL) {
           fprintf(stderr, "rpc_server: svcudp_create\n");
           exit(1);
       }
       (void)svc_register(xprt, x, 1, callback, 0);

       ans = callrpc(hostname, EXAMPLEPROG, EXAMPLEVERS,
           EXAMPLEPROC_CALLBACK, xdr_int, &x, xdr_void, 0);
       if (ans != 0) {
           fprintf(stderr, "call: ");
           clnt_perrno(ans);
           fprintf(stderr, "\n");
       }
       svc_run();
       fprintf(stderr, "Error: svc_run should not have returned\n");
   }

   callback(rqstp, transp)
       register struct svc_req *rqstp;
       register SVCXPRT *transp;
   {
       switch (rqstp->rq_proc) {
       case 0:
           if (!svc_sendreply(transp, xdr_void, 0)) {
               fprintf(stderr, "err: rusersd\n");
               exit(1);
           }
           exit(0);
       case 1:
           if (!svc_getargs(transp, xdr_void, 0)) {
               svcerr_decode(transp);
               exit(1);
           }
           fprintf(stderr, "client got callback\n");
           if (!svc_sendreply(transp, xdr_void, 0)) {
               fprintf(stderr, "err: rusersd\n");
               exit(1);
           }
       }
   }

Server program

   /*
    * server
    */
   #include <stdio.h>
   #include <rpc/rpc.h>
   #include <sys/signal.h>
   

   char *getnewprog();
   char hostname[256];
   int docallback();
   int pnum;    /* program number for callback routine */

   main(argc, argv)
       int argc;
       char **argv;
   {
       gethostname(hostname, sizeof(hostname));
       registerrpc(EXAMPLEPROG, EXAMPLEVERS, EXAMPLEPROC_CALLBACK,
           getnewprog, xdr_int, xdr_void);
       fprintf(stderr, "server going into svc_run\n");
       signal(SIGALRM, docallback);
       alarm(10);
       svc_run();
       fprintf(stderr, "Error: svc_run should not have returned\n");
   }

   char *
   getnewprog(pnump)
           char *pnump;
   {
           pnum = *(int *)pnump;
           return NULL;
   }
   

   docallback()
   {
       int ans;

       ans = callrpc(hostname, pnum, 1, 1, xdr_void, 0, xdr_void, 0);
       if (ans != 0) {
           fprintf(stderr, "server: ");
           clnt_perrno(ans);
           fprintf(stderr, "\n");
       }
   }



© 2003 Caldera International, Inc. All rights reserved.
SCO OpenServer Release 5.0.7 -- 11 February 2003