- 11 October 2011, 1 commit

Committed by Daniel P. Berrange
The libvirtd daemon had a few crude systemtap probes. Some of these were broken during the RPC rewrite. The new modular RPC code is structured in a way that allows much more effective tracing.

Instead of trying to hook up the original probes, define a new set of probes for the RPC and event code. The master probes file is now src/probes.d. This contains probes for the virNetServerClientPtr, virNetClientPtr, virNetSocketPtr, virNetTLSContextPtr and virNetTLSSessionPtr modules. Also add probes for the poll event loop.

The src/dtrace2systemtap.pl script can convert the probes.d file into a libvirt_probes.stp file, to make using the probes from systemtap much simpler. The src/rpc/gensystemtap.pl script can generate a set of systemtap functions for translating RPC enum values into printable strings. This works for all RPC header enums (program, type, status, procedure) and also the authentication enum.

The PROBE macro will automatically generate a VIR_DEBUG statement, so any place with a PROBE can remove any existing manual DEBUG statements.

* daemon/libvirtd.stp, daemon/probes.d: Remove obsolete probing
* daemon/libvirtd.h: Remove probe macros
* daemon/Makefile.am: Remove all probe building/install rules
* daemon/remote.c: Update authentication probes
* src/dtrace2systemtap.pl, src/rpc/gensystemtap.pl: Scripts to generate STP files
* src/internal.h: Add probe macros
* src/probes.d: Master list of probes
* src/rpc/virnetclient.c, src/rpc/virnetserverclient.c, src/rpc/virnetsocket.c, src/rpc/virnettlscontext.c, src/util/event_poll.c: Insert probe points, removing any DEBUG statements that duplicate the info
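
As a rough sketch of the pattern described here (the macro body is illustrative, not the verbatim libvirt source): a probe is declared once in probes.d, and the PROBE macro both fires the static probe and emits the matching VIR_DEBUG line.

    /* In src/probes.d (DTrace provider syntax), a probe might be
     * declared along these lines:
     *     provider libvirt {
     *         probe rpc_server_client_new(void *client, int fd);
     *     };
     * dtrace -h then generates LIBVIRT_RPC_SERVER_CLIENT_NEW() and
     * LIBVIRT_RPC_SERVER_CLIENT_NEW_ENABLED() macros for C code. */

    /* Sketch of a PROBE macro in src/internal.h: one call site yields
     * both the static probe and an equivalent debug log message. */
    #define PROBE(NAME, FMT, ...)                               \
        do {                                                    \
            VIR_DEBUG("PROBE " #NAME " " FMT, __VA_ARGS__);     \
            if (LIBVIRT_ ## NAME ## _ENABLED())                 \
                LIBVIRT_ ## NAME(__VA_ARGS__);                  \
        } while (0)

A call such as PROBE(RPC_SERVER_CLIENT_NEW, "client=%p fd=%d", client, fd) then needs no separate VIR_DEBUG beside it.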

- 1 September 2011, 1 commit

Committed by Daniel P. Berrange
Commit 2c85644b attempted to fix a problem with tracking RPC messages from streams by doing:

    -    if (msg->header.type == VIR_NET_REPLY) {
    +    if (msg->header.type == VIR_NET_REPLY ||
    +        (msg->header.type == VIR_NET_STREAM &&
    +         msg->header.status != VIR_NET_CONTINUE)) {
             client->nrequests--;

In other words, any stream packet with status NET_OK or NET_ERROR would cause nrequests to be decremented. This is great if the packet is from a synchronous virStreamFinish or virStreamAbort API call, but wildly wrong if it is from a server-initiated abort. The latter resulted in 'nrequests' being decremented below zero, which then caused all I/O for that client to be stopped.

Instead of trying to infer whether we need to decrement the nrequests field from the message type/status, introduce an explicit 'bool tracked' field to mark whether the virNetMessagePtr object is subject to tracking. Also add a virNetMessageClear function to allow a message's contents to be cleared out without adversely impacting the 'tracked' field, as a naive memset() would do.

* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Add a 'bool tracked' field and virNetMessageClear() API
* daemon/remote.c, daemon/stream.c, src/rpc/virnetclientprogram.c, src/rpc/virnetclientstream.c, src/rpc/virnetserverclient.c, src/rpc/virnetserverprogram.c: Switch over to use virNetMessageClear() and pass in the 'bool tracked' value when creating messages
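
A minimal sketch of what virNetMessageClear() amounts to, given the description above (field layout assumed):

    /* Clear a message for reuse while preserving the tracking flag;
     * a plain memset() would reset 'tracked' and break the request
     * accounting for this message. */
    void virNetMessageClear(virNetMessagePtr msg)
    {
        bool tracked = msg->tracked;   /* save before wiping */
        memset(msg, 0, sizeof(*msg));
        msg->tracked = tracked;        /* restore afterwards */
    }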

- 17 August 2011, 1 commit

Committed by Daniel P. Berrange
Every active stream results in a reference being held on the virNetServerClientPtr object. This meant that if a client quit with any streams active, then although all I/O was stopped, the virNetServerClientPtr object would leak. This caused libvirtd to leak any file handles associated with open streams whenever a client quit.

To fix this, when virNetServerClientClose is called, a registered callback is invoked which lets the daemon release the streams and thus the extra references.

* daemon/remote.c: Add a hook to close all streams
* daemon/stream.c, daemon/stream.h: Add API for releasing all streams
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h: Allow registration of a hook that triggers when closing a client
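
The hook presumably takes a shape along these lines (signatures assumed from the description, not copied from the headers):

    /* Hypothetical close-hook registration in virnetserverclient.h */
    typedef void (*virNetServerClientCloseFunc)(virNetServerClientPtr client);

    void virNetServerClientSetCloseHook(virNetServerClientPtr client,
                                        virNetServerClientCloseFunc cb);

    /* daemon/remote.c would then register a function that walks the
     * client's open streams, aborting and freeing each one, so the
     * stream references (and their file handles) are released before
     * the client object itself goes away. */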

- 3 August 2011, 1 commit

Committed by Eric Blake
Detected by Coverity.

* src/rpc/virnetserverclient.c (virNetServerClientDispatchRead): Avoid a NULL dereference on OOM.
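
The shape of such a fix is the usual OOM guard (names illustrative, not the actual hunk):

    /* Never dereference an allocator's result before the NULL check */
    virNetMessagePtr msg = virNetMessageNew();   /* may return NULL on OOM */
    if (!msg)
        return -1;               /* bail out instead of crashing */
    msg->header.type = VIR_NET_REPLY;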

- 2 August 2011, 1 commit

Committed by Eric Blake
Steps to reproduce this problem (vm1 is not running):

    for i in `seq 50`; do virsh managedsave vm1& done; killall virsh

Pre-patch, virNetServerClientClose could end up setting client->sock to NULL prior to other cleanup functions trying to use client->sock. This fixes things by checking for NULL in more places, and by deferring the cleanup until after all queued messages have been served.

* src/rpc/virnetserverclient.c (virNetServerClientRegisterEvent, virNetServerClientGetFD, virNetServerClientIsSecure, virNetServerClientLocalAddrString, virNetServerClientRemoteAddrString): Check for closed socket.
(virNetServerClientClose): Rearrange close sequence.

Analysis from Wen Congyang.
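
The NULL checks likely follow this pattern in each accessor (a sketch; the lock field name is an assumption):

    /* Tolerate a socket that has already been torn down */
    int virNetServerClientGetFD(virNetServerClientPtr client)
    {
        int fd = -1;
        virMutexLock(&client->lock);
        if (client->sock)                      /* may already be closed */
            fd = virNetSocketGetFD(client->sock);
        virMutexUnlock(&client->lock);
        return fd;
    }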

- 26 July 2011, 1 commit

Committed by Daniel P. Berrange
When an incoming RPC message is ready for processing, virNetServerClientDispatchRead() will invoke the 'dispatchFunc' callback. This is set to virNetServerDispatchNewMessage. This function puts the message + client in a queue for processing by the thread pool. The thread pool worker function is virNetServerHandleJob. The first thing this does is acquire an extra reference on the 'client'.

Unfortunately, between the time the message + client are put on the thread pool queue and the time the worker runs, the client object may have had its last reference removed. We clearly need to add the reference to the client object before putting the client on the processing queue.

* src/rpc/virnetserverclient.c: Add a reference to the client when invoking the dispatch function
* src/rpc/virnetserver.c: Don't acquire a reference to the client when in the worker thread
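
In outline, the fix moves the reference acquisition to the dispatch side, before the job can sit unowned on the queue. A sketch with assumed job plumbing:

    static int virNetServerDispatchNewMessage(virNetServerClientPtr client,
                                              virNetMessagePtr msg,
                                              void *opaque)
    {
        virNetServerPtr srv = opaque;
        virNetServerJobPtr job;

        if (VIR_ALLOC(job) < 0)
            return -1;
        job->client = client;
        job->msg = msg;

        virNetServerClientRef(client);          /* hold for the worker */
        if (virThreadPoolSendJob(srv->workers, job) < 0) {
            virNetServerClientFree(client);     /* undo on queue failure */
            VIR_FREE(job);
            return -1;
        }
        return 0;
    }

virNetServerHandleJob then stops taking its own reference and instead releases the one acquired here when it finishes.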

- 19 July 2011, 1 commit

Committed by Daniel P. Berrange
When unregistering an I/O callback from a virNetSocket object, there is still a chance that an event may come in on the callback. In this case it is possible that the virNetSocket might have been freed already. Make use of a virFreeCallback when registering the I/O callbacks, and hold a reference for the entire time the callback is set.

* src/rpc/virnetsocket.c: Register a free function for the file handle watch
* src/rpc/virnetsocket.h, src/rpc/virnetserverservice.c, src/rpc/virnetserverclient.c, src/rpc/virnetclient.c: Add a free function for the socket I/O watches
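
Conceptually (virEventAddHandle's final argument really is a virFreeCallback; the handler and reference-counting names are assumptions):

    /* Drop the watch's reference only once the event loop guarantees
     * no further callbacks can be delivered for this handle. */
    static void virNetSocketEventFree(void *opaque)
    {
        virNetSocketPtr sock = opaque;
        virNetSocketFree(sock);
    }

    /* At registration time: */
    virNetSocketRef(sock);                      /* keep sock alive for the watch */
    if (virEventAddHandle(fd, events, virNetSocketEventHandle,
                          sock, virNetSocketEventFree) < 0)
        virNetSocketFree(sock);                 /* watch never registered */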

- 13 July 2011, 1 commit

Committed by Eric Blake
Continuation of commit 313ac7fd, and enforce things with a syntax check. Technically, virNetServerClientCalculateHandleMode is not printing a mode_t, but rather a collection of VIR_EVENT_HANDLE_* bits; however, these bits are < 8, so there is no difference in the output, and that was the easiest way to silence the new syntax check.

* cfg.mk (sc_flags_debug): New syntax check.
(exclude_file_name_regexp--sc_flags_debug): Add exemptions.
* src/fdstream.c (virFDStreamOpenFileInternal): Print flags in hex, mode_t in octal.
* src/libvirt-qemu.c (virDomainQemuMonitorCommand, virDomainQemuAttach): Likewise.
* src/locking/lock_driver_nop.c (virLockManagerNopInit): Likewise.
* src/locking/lock_driver_sanlock.c (virLockManagerSanlockInit): Likewise.
* src/locking/lock_manager.c: Likewise.
* src/qemu/qemu_migration.c: Likewise.
* src/qemu/qemu_monitor.c: Likewise.
* src/rpc/virnetserverclient.c (virNetServerClientCalculateHandleMode): Print mode with %o.
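
The resulting logging convention is simply (a sketch):

    /* flags are bitmasks, so hex reads best; mode_t is traditionally octal */
    VIR_DEBUG("path=%s flags=0x%x mode=0%o", path, flags, mode);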

- 8 July 2011, 2 commits

Committed by Daniel P. Berrange
The dispatch for the CLOSE RPC call was invoking the virNetServerClientClose() method. This caused the client connection to be immediately terminated, which meant the reply to the final RPC message was never sent. Prior to the RPC rewrite we merely flagged the connection for closing, and actually closed it when the next RPC call dispatch had completed.

* daemon/remote.c: Flag connection for a delayed close
* daemon/stream.c: Update to use the new API for closing a failed connection
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h: Add support for a delayed connection close. Rename the virNetServerClientMarkClose method to virNetServerClientImmediateClose to clarify its semantics
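
A sketch of the delayed-close pattern, with assumed field names:

    /* Flag the connection now; actually close it only after the reply
     * to the final RPC message has been flushed to the socket. */
    void virNetServerClientDelayedClose(virNetServerClientPtr client)
    {
        client->delayedClose = true;
    }

    /* In the write-completion path: */
    if (client->delayedClose && !client->tx)   /* nothing left to send */
        virNetServerClientImmediateClose(client);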

Committed by Daniel P. Berrange
When sending back the final OK or ERROR message on completion of a stream, we were not decrementing the 'nrequests' tracker on the client. With the default request limit of 5, this meant that once a client had created 5 streams, it was unable to process any further RPC calls. There was also a bug when handling an error from decoding a message length header, which meant a client connection would not be closed immediately.

* src/rpc/virnetserverclient.c: Fix release of the request slot after stream completion & mark the client for close on error
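
The error-handling half of the fix might look like this (the flag name is an assumption):

    /* A bogus length header should doom the connection immediately */
    if (virNetMessageDecodeLength(msg) < 0) {
        client->wantClose = true;   /* assumed flag checked by the main loop */
        return;
    }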

- 4 July 2011, 1 commit

Committed by Eric Blake
Spotted by Coverity. If we don't update tmp each time through the loop, then if the filter being removed was not the head of the list, we accidentally lose all filters prior to the one we wanted to remove.

* src/rpc/virnetserverclient.c (virNetServerClientRemoveFilter): Don't lose unrelated filters.
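
The underlying bug is the classic singly-linked-list removal mistake: the 'previous' pointer must advance on every iteration, not only on a match. A corrected traversal as a sketch (field names assumed):

    virNetServerClientFilterPtr filter = client->filters;
    virNetServerClientFilterPtr prev = NULL;

    while (filter) {
        if (filter->id == filterID) {
            if (prev)
                prev->next = filter->next;       /* unlink mid-list */
            else
                client->filters = filter->next;  /* unlink at the head */
            VIR_FREE(filter);
            break;
        }
        prev = filter;          /* the update the original loop forgot */
        filter = filter->next;
    }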

- 1 July 2011, 2 commits

Committed by Daniel P. Berrange
To save on memory reallocation, virNetMessage instances that have been transmitted may be reused for a subsequent incoming message. We forgot to clear out the old data of the message fully, which caused later confusion upon read.

* src/rpc/virnetserverclient.c: memset the entire message before reusing it
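
The fix amounts to one line in the reuse path (a sketch):

    /* Wipe the whole object, not just selected fields, before the
     * transmitted message is recycled for the next incoming call. */
    memset(msg, 0, sizeof(*msg));

Note that the newer commit above (1 September) later replaces this raw memset with virNetMessageClear() so the 'tracked' flag survives.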

Committed by Daniel P. Berrange
The virNetServerClient object had a hardcoded limit of 10 requests per client. Extend the constructor to allow the limit to be passed in as a configurable variable. Wire this up to the 'max_client_requests' config parameter in libvirtd.

* daemon/libvirtd.c: Pass max_client_requests into services
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Pass nrequests_client_max to clients
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h: Allow a configurable request limit
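
The plumbing implied here, as a sketch (the exact constructor signature is an assumption):

    /* libvirtd reads max_client_requests and hands it down: */
    virNetServerServicePtr
    virNetServerServiceNewTCP(const char *nodename,
                              const char *service,
                              int auth,
                              size_t nrequests_client_max);  /* new parameter */

    /* ... and each accepted client stores its own limit: */
    client->nrequests_max = nrequests_client_max;   /* was hardcoded to 10 */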

- 29 June 2011, 1 commit

Committed by Daniel P. Berrange
When a filter steals an RPC message, that message must not be freed, except by the filter code itself.

* src/rpc/virnetserverclient.c: Don't free stolen RPC messages
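
A sketch of the dispatch convention this implies (return-value semantics assumed):

    /* A positive return from a filter means it took ownership of the
     * message ("stole" it); the caller must then forget the pointer. */
    int ret = filter->func(client, msg, filter->opaque);
    if (ret < 0)
        goto error;
    if (ret > 0)
        msg = NULL;   /* stolen: skip the later virNetMessageFree(msg) */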

- 24 June 2011, 1 commit

Committed by Daniel P. Berrange
To facilitate creation of new daemons providing XDR RPC services, pull a lot of the libvirtd daemon code into a set of reusable objects:

* virNetServer: A server contains one or more services which accept incoming clients. It maintains the list of active clients and has a list of RPC programs which can be used by clients. When a client produces a complete RPC message, the server passes it on to the corresponding program for handling, and queues any response back with the client.
* virNetServerClient: Encapsulates a single client connection. All I/O for the client is handled, reading & writing RPC messages.
* virNetServerProgram: Handles processing and dispatch of RPC method calls for a single RPC (program, version). Multiple programs can be registered with the server.
* virNetServerService: Encapsulates socket(s) listening for new connections. Each service listens on a single host/port, but may have multiple sockets if on a dual IPv4/IPv6 host.

Each new daemon now merely has to define the list of RPC procedures & their handlers; it does not need to deal with any network-related functionality at all.
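
Put together, a new daemon built on these objects might boot roughly like this. Everything below is illustrative: the constructor names and arguments are assumptions, not the exact API.

    /* Hypothetical skeleton of a daemon using the reusable RPC objects */
    int main(void)
    {
        virNetServerPtr srv = virNetServerNew();       /* assumed ctor */
        virNetServerServicePtr svc =
            virNetServerServiceNewTCP("::", "16509");  /* listening socket(s) */
        virNetServerProgramPtr prog =
            virNetServerProgramNew(MY_PROGRAM, MY_VERSION,
                                   myProcs, nMyProcs); /* dispatch table */

        virNetServerAddService(srv, svc, NULL);        /* accept clients */
        virNetServerAddProgram(srv, prog);             /* handle their RPCs */
        virNetServerRun(srv);                          /* blocks in event loop */
        return 0;
    }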