- 21 Dec 2012, 2 commits
-
-
Committed by Daniel P. Berrange
-
Committed by Daniel P. Berrange
-
- 02 Nov 2012, 1 commit
-
-
Committed by Daniel P. Berrange
The libvirt coding standard is to use 'function(...args...)' instead of 'function (...args...)'. A non-trivial number of places did not follow this rule and are fixed in this patch. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
- 21 Sep 2012, 1 commit
-
-
Committed by Eric Blake
https://www.gnu.org/licenses/gpl-howto.html recommends that the 'If not, see <url>.' phrase be a separate sentence.
* tests/securityselinuxhelper.c: Remove doubled line.
* tests/securityselinuxtest.c: Likewise.
* globally: s/; If/. If/
-
- 10 Sep 2012, 1 commit
-
-
Committed by Christophe Fergeau
e5a1bee0 introduced a regression in Boxes: when Boxes is left idle (it still makes some libvirt calls in the background), the libvirt connection gets closed after a few minutes. What happens is that this code in virNetClientIOHandleOutput gets triggered:

    if (!thecall)
        return -1; /* Shouldn't happen, but you never know... */

and after the changes in e5a1bee0, this causes the libvirt connection to be closed. Upon further investigation, virNetClientIOHandleOutput is called from gvir_event_handle_dispatch in libvirt-glib, which is triggered because the client fd became writable. However, between the time gvir_event_handle_dispatch is called and the time the client lock is grabbed and virNetClientIOHandleOutput is called, another thread runs and completes the current call. 'thecall' is then NULL when the first thread gets to run virNetClientIOHandleOutput. After describing this situation on IRC, danpb suggested:

    11:37 < danpb> In that case I think the correct thing would be to change 'return -1' above to 'return 0' since that's not actually an error - its a rare, but expected event

which is what this patch does. I've tested it against master libvirt: without this patch I get disconnected in less than 5 minutes, while with it I was not disconnected in ~10 minutes of the same workload.
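A minimal sketch of the resulting logic, using simplified stand-in types rather than libvirt's internal ones (the struct and its fields here are hypothetical):

    #include <stddef.h>

    /* Reduced stand-in for libvirt's internal call structure. */
    struct call {
        int mode;               /* MODE_WAIT_TX when output is pending */
        struct call *next;
    };

    enum { MODE_WAIT_TX = 1 };

    /* Invoked when the fd turns writable. Another thread may have
     * completed every queued call between the fd event firing and
     * this function running under the client lock. */
    static int
    handle_output(struct call *queue)
    {
        struct call *thecall = queue;

        while (thecall && thecall->mode != MODE_WAIT_TX)
            thecall = thecall->next;

        if (!thecall)
            return 0;   /* was 'return -1': a benign race, not an error */

        /* ... write pending data for thecall ... */
        return 0;
    }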
-
- 27 Aug 2012, 1 commit
-
-
Committed by Guannan Ren
The client socket could have been set to NULL by the event-loop thread after an async event fired.
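A hedged illustration of the defensive check this implies, with a hypothetical reduced client type (the real field is private to src/rpc/virnetclient.c):

    struct client {
        void *sock;   /* NULLed by the event-loop thread on close */
    };

    static int
    on_async_event(struct client *client)
    {
        if (!client->sock)
            return -1;   /* connection already torn down; bail out */
        /* ... safe to use client->sock while holding the client lock ... */
        return 0;
    }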
-
- 22 Aug 2012, 1 commit
-
-
Committed by Peter Krempa
Unfortunately libssh2 doesn't support all of the host key types that can be saved in the known_hosts file, and it does not report that parsing the file failed. This results in truncated known_hosts files when the standard client also stores keys in other formats (e.g. ecdsa-sha2-nistp256). This patch changes the default location of the known_hosts file to the libvirt private configuration directory, where it will only be written by the libssh2 layer itself. This prevents trashing the user's known_hosts file.
-
- 21 Aug 2012, 1 commit
-
-
Committed by Peter Krempa
This patch adds a glue layer to enable using the libssh2 code with the network client code. As in the original client implementation, shell code is sent to the server to detect the correct options for netcat and connect to libvirt's unix socket.
-
- 15 Aug 2012, 1 commit
-
-
Committed by Daniel P. Berrange
Currently the virNetClientPtr constructor will always register the async IO event handler and the keepalive objects. In the case of the lock manager, there will be no event loop available nor keepalive support required. Split this setup out of the constructor and into separate methods. The remote driver will enable async IO and keepalives, while the LXC driver will only enable async IO. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
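A sketch of the resulting call sequence; the function names below are stand-ins inferred from the description, not the exact internal API:

    typedef struct client client;

    extern client *client_new_unix(const char *sockpath);
    extern int client_register_async_io(client *c);   /* remote + LXC */
    extern int client_register_keepalive(client *c);  /* remote only */

    static client *
    remote_driver_open(const char *sockpath)
    {
        client *c = client_new_unix(sockpath);
        if (!c)
            return NULL;
        if (client_register_async_io(c) < 0 ||
            client_register_keepalive(c) < 0)
            return NULL;   /* caller unrefs c on error */
        return c;
    }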
-
- 09 Aug 2012, 1 commit
-
-
Committed by Peter Feiner
Per man poll(2), poll does not set errno=EAGAIN on interrupt; it sets errno=EINTR. Have libvirt retry on the appropriate errno. Under heavy load, a program of mine kept getting libvirt errors 'poll on socket failed: Interrupted system call'. The signals were SIGCHLD from processes forked by threads unrelated to those using libvirt.
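The standard retry pattern for this, as a self-contained sketch:

    #include <poll.h>
    #include <errno.h>

    /* Wait for events, restarting when a signal interrupts the call. */
    static int
    poll_retry(struct pollfd *fds, nfds_t nfds, int timeout)
    {
        int rc;
        do {
            rc = poll(fds, nfds, timeout);
        } while (rc < 0 && errno == EINTR);
        return rc;
    }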
-
- 07 Aug 2012, 5 commits
-
-
Committed by Daniel P. Berrange
Make all the virNetClient* objects use the virObject APIs for reference counting. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Daniel P. Berrange
Make virNetSocket use the virObject APIs for reference counting. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Daniel P. Berrange
Make virKeepAlive use the virObject APIs for reference counting. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Daniel P. Berrange
Make virNetSASLContext and virNetSASLSession use the virObject APIs for reference counting. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Daniel P. Berrange
Make virNetTLSContext and virNetTLSSession use the virObject APIs for reference counting. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
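The virObject pattern these five commits converge on looks roughly like this (API shape as of the 2012 codebase; it needs libvirt's internal "virobject.h" header, so treat the exact signatures as approximate):

    /* Declare a class with a dispose callback, then replace the
     * hand-rolled Ref/Free pairs with virObjectRef/virObjectUnref. */
    static virClassPtr virNetClientClass;
    static void virNetClientDispose(void *obj);

    static int
    virNetClientOnceInit(void)
    {
        if (!(virNetClientClass = virClassNew("virNetClient",
                                              sizeof(virNetClient),
                                              virNetClientDispose)))
            return -1;
        return 0;
    }

    /* Callers then use:
     *     virObjectRef(client);
     *     ...
     *     virObjectUnref(client);
     * instead of virNetClientRef()/virNetClientFree(). */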
-
- 04 Aug 2012, 1 commit
-
-
Committed by Peter Krempa
The last message of the client was not freed, leaking 4 bytes of memory in the client when the remote daemon crashed while processing a message.
-
- 30 Jul 2012, 3 commits
-
-
Committed by Daniel P. Berrange
In the socket event handler for the RPC client we must deal with read/write events before checking for EOF, otherwise we might close the socket before we've read and acted upon the last RPC messages. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
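The ordering the commit describes, sketched against libvirt's public event flags (the handler bodies are elided):

    #include <libvirt/libvirt.h>

    /* Ordering sketch only: drain I/O first, honour close last. */
    static void
    socket_event(int watch, int fd, int events, void *opaque)
    {
        if (events & VIR_EVENT_HANDLE_WRITABLE) {
            /* flush pending RPC output */
        }
        if (events & VIR_EVENT_HANDLE_READABLE) {
            /* read and dispatch any queued RPC messages */
        }
        if (events & (VIR_EVENT_HANDLE_ERROR | VIR_EVENT_HANDLE_HANGUP)) {
            /* only now is it safe to mark the socket for close */
        }
        (void)watch; (void)fd; (void)opaque;
    }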
-
Committed by Daniel P. Berrange
Allow detection of socket close in virNetClient via a callback function, triggered on any condition that causes the socket to be closed. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
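A sketch of such a hook; the shape below is reconstructed from the description rather than copied from the patch:

    /* Hypothetical reduced form of the close-notification hook. */
    typedef void (*close_hook)(void *client, int reason, void *opaque);

    struct client {
        close_hook close_cb;
        void *close_opaque;
    };

    /* Called from every path that closes the socket, so users learn
     * about EOF, I/O errors and keepalive timeouts alike. */
    static void
    notify_close(struct client *c, int reason)
    {
        if (c->close_cb)
            c->close_cb(c, reason, c->close_opaque);
    }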
-
Committed by Daniel P. Berrange
Currently, if the keepalive timer triggers, the 'markClose' flag is set on the virNetClient and a controlled shutdown is then performed. If an I/O error occurs during a read or write on the connection, an error is raised back to the caller, but the connection isn't marked for close. This patch ensures that all I/O error scenarios always result in the connection being marked for close. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
- 23 Jul 2012, 1 commit
-
-
Committed by Osier Yang
Since the FSF address can change from time to time, GNU now recommends the following (http://www.gnu.org/licenses/gpl-howto.html): "You should have received a copy of the GNU General Public License along with Foobar. If not, see <http://www.gnu.org/licenses/>." This patch removes the explicit FSF address and uses the above instead (with 'Lesser' inserted before 'General', of course). Except for a handful of security driver files, everything was changed automatically; the copyright headers of the security files are not complete, which is why they were done manually:
src/security/security_selinux.h
src/security/security_driver.h
src/security/security_selinux.c
src/security/security_apparmor.h
src/security/security_apparmor.c
src/security/security_driver.c
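For libvirt's LGPL sources the resulting header comment reads (with 'Lesser' inserted as noted):

    /*
     * You should have received a copy of the GNU Lesser General Public
     * License along with this library.  If not, see
     * <http://www.gnu.org/licenses/>.
     */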
-
- 18 Jul 2012, 1 commit
-
-
Committed by Daniel P. Berrange
This removes all the per-file error reporting macros from the code in src/rpc/. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
- 13 Jun 2012, 10 commits
-
-
Committed by Daniel P. Berrange
First, 'poll' can't return EWOULDBLOCK; and second, we're checking errno so far away from the poll() call that we've probably already trashed the original errno value.
-
Committed by Jiri Denemark
In addition to keepalive responses, we also need to send keepalive requests from the client IO loop to properly detect a dead connection in the case where a libvirt API is called from the main loop, which prevents any timers from firing.
-
Committed by Jiri Denemark
The previous commit removed the only usage of the 'all' parameter of virKeepAliveStopInternal, which was actually the only reason for having virKeepAliveStopInternal. This effectively reverts most of commit 6446a9e2.
-
Committed by Jiri Denemark
When a libvirt API is called from the main event loop (which seems to be common in event-based glib apps), the client IO loop would properly handle keepalive requests sent by a server but would not actually send them, because the main event loop is blocked by the API call. This patch gets rid of the response timer; the thread that is processing keepalive requests is now also responsible for queueing responses for delivery.
-
Committed by Jiri Denemark
This makes it possible to create and queue new calls while the IO loop is running.
-
Committed by Jiri Denemark
As we never drop non-blocking calls, the return value that used to indicate a call was dropped is no longer needed.
-
Committed by Jiri Denemark
As non-blocking calls are no longer dropped, we don't really need to care that much about their fate and wait for the thread with the buck to process them. If another thread has the buck, we can just push a non-blocking call onto the queue and be done with it.
-
Committed by Jiri Denemark
So far, we were dropping non-blocking calls whenever sending them would block. In case a client is sending lots of stream calls (which are not supposed to generate any reply), the assumption that having other calls in the queue is sufficient to get a reply from the server doesn't hold. I tried to fix this in b1e374a7 but failed and reverted that commit. With this patch, non-blocking calls are never dropped (unless the connection is being closed) and will always be sent.
-
Committed by Jiri Denemark
Normally, when every call has a thread associated with it, a thread may get the buck and be in charge of sending all calls until its own call is done. When we introduced non-blocking calls, we had to add special handling for new non-blocking calls. This patch uses the event loop to send data when there is no thread to take the buck, so that any non-blocking calls left in the queue are properly sent without having to handle them specially. It also avoids adding even more cruft to the client IO loop in the following patches. With this change in, non-blocking calls may see unpredictable delivery delays when the client has no event loop registered. However, the only non-blocking calls we have are keepalives and we already require an event loop for them, which makes this a non-issue until someone introduces new non-blocking calls.
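A reduced sketch of the queueing decision, with hypothetical field names (the real state lives in src/rpc/virnetclient.c):

    #include <stdbool.h>

    struct client {
        bool have_the_buck;     /* some thread is driving the IO loop */
        bool want_write_event;  /* event loop should watch for POLLOUT */
    };

    /* Queue a non-blocking call: if a thread holds the buck it will
     * send it; otherwise the event loop's writable event flushes the
     * queue for us. */
    static void
    queue_nonblocking_call(struct client *c)
    {
        /* ... append the call to the send queue ... */
        if (!c->have_the_buck)
            c->want_write_event = true;
    }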
-
Committed by Jiri Denemark
When analyzing our debug logs, I'm always confused about what each of the pointers means. Let's be explicit.
-
- 05 Jun 2012, 1 commit
-
-
Committed by Michal Privoznik
Currently, we allocate the buffer for RPC messages statically. This is not such a pain while the RPC limits are small; however, if we ever want to increase those limits, we need to allocate the buffer dynamically, based on the RPC message length (the first 4 bytes). This will decrease our memory usage in most cases while remaining flexible enough in corner cases.
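A self-contained sketch of the dynamic allocation, with hypothetical names and an assumed maximum:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <arpa/inet.h>

    #define MSG_LEN_BYTES 4
    #define MSG_LEN_MAX (16 * 1024 * 1024)   /* assumed RPC limit */

    /* Allocate the message buffer from the 4-byte big-endian length
     * prefix instead of using a fixed-size static buffer. */
    static char *
    msg_buffer_alloc(const unsigned char hdr[MSG_LEN_BYTES], size_t *len)
    {
        uint32_t netlen;
        memcpy(&netlen, hdr, MSG_LEN_BYTES);
        *len = ntohl(netlen);
        if (*len == 0 || *len > MSG_LEN_MAX)
            return NULL;                     /* still enforce limits */
        return malloc(*len);
    }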
-
- 23 May 2012, 1 commit
-
-
Committed by Jiri Denemark
This reverts commit b1e374a7, which was rather bad since I failed to consider all sides of the issue. The main things I didn't consider properly are:
- a thread which sends a non-blocking call waits for the thread with the buck to process the call
- the code doesn't expect non-blocking calls to remain in the queue unless they were already partially sent
Thus, the reverted patch actually breaks more than it fixes, and clients (which may even be libvirtd during p2p migrations) will likely end up in a deadlock.
-
- 26 Apr 2012, 2 commits
-
-
Committed by Jiri Denemark
Currently, non-blocking calls are either sent immediately or discarded in case sending would block. This was implemented on the assumption that the non-blocking keepalive call is not needed, as there are other calls in the queue which would keep the connection alive. However, if those calls are no-reply calls (such as those carrying stream data), the remote party knows the connection is alive, but since we don't get any reply from it, we think the connection is dead. This is most visible in tunnelled migration: if it happens to take longer than the keepalive timeout (30s by default), it may be unexpectedly aborted because the connection is considered dead. With this patch, we only discard non-blocking calls when the last call with a thread is completed and thus there is no thread left to keep sending the remaining non-blocking calls.
-
Committed by Peter Krempa
The docs for virConnectSetKeepAlive() advertise that this function should be able to disable keepalives on a negative or zero interval. This patch removes the check that prohibited this and adds code to disable keepalives on a negative/zero interval.
* src/libvirt.c: virConnectSetKeepAlive(): remove check for negative values
* src/rpc/virnetclient.c, src/rpc/virnetclient.h: add virNetClientKeepAliveStop() to disable keepalive messages
* src/remote/remote_driver.c: remoteSetKeepAlive(): add ability to disable keepalives
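From the public API side, disabling keepalives then looks like this (virConnectSetKeepAlive is the real entry point; the URI is just an example):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn)
            return 1;
        /* A zero or negative interval now disables keepalives. */
        if (virConnectSetKeepAlive(conn, 0, 0) < 0)
            fprintf(stderr, "failed to disable keepalives\n");
        virConnectClose(conn);
        return 0;
    }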
-
- 30 Mar 2012, 1 commit
-
-
Committed by Daniel P. Berrange
The code is splattered with a mix of 'sizeof foo', 'sizeof (foo)' and 'sizeof(foo)'. Standardize on sizeof(foo) and add a syntax-check rule to enforce it. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
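For illustration, the standardized form:

    #include <string.h>

    static void
    example(void)
    {
        char buf[32];
        memset(buf, 0, sizeof(buf));   /* standardized: sizeof(foo) */
        /* flagged by the new check: sizeof buf, sizeof (buf) */
    }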
-
- 05 Mar 2012, 1 commit
-
-
Committed by Jiri Denemark
A multi-threaded client with an event loop may crash if one of its threads closes a connection while the event loop is in the middle of sending a keep-alive message (either a request or a response). The right place for the close is inside virNetClientIOEventLoop(), between poll() and virNetClientLock(). We should only close a connection directly if no one is using it, and defer the closing to the last user otherwise. So far we only did so if the close was initiated by a keep-alive timeout.
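A reduced sketch of the deferred close (field names assumed):

    #include <stdbool.h>

    struct client {
        int nusers;        /* threads currently inside the IO loop */
        bool want_close;   /* close requested while still in use */
    };

    extern void close_socket(struct client *c);   /* elided */

    static void
    client_request_close(struct client *c)
    {
        if (c->nusers > 0)
            c->want_close = true;   /* last user closes on its way out */
        else
            close_socket(c);
    }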
-
- 12 Jan 2012, 1 commit
-
-
Committed by Michal Privoznik
If a client stream has no data to sink and has not received EOF, a dummy packet is sent to the daemon signalling that the client is ready to sink some data. However, after we added the event loop to the client, a race may occur: Thread 1 calls virNetClientStreamRecvPacket and, since no data is cached and the stream has no EOF, decides to send a dummy packet to the server, which will send some data in turn. However, between this decision and the actual message exchange with the server, Thread 2 receives the last stream data from the server. EOF is therefore set on the stream, and if there were a call waiting (there isn't yet), it would be woken up. But Thread 1 hasn't sent anything so far, so there is no call to wake up. Thread 1 then sends the dummy packet to the daemon, which ignores it because no stream is associated with such a packet, and therefore no reply will ever come. This race causes the client to hang indefinitely.
-
- 08 Dec 2011, 2 commits
-
-
Committed by Daniel P. Berrange
When one thread passes the buck to another thread, it uses virCondSignal to wake up the target thread. The variable 'haveTheBuck' is not updated in a race-free manner when this occurs: the current thread sets it to false, and the woken-up thread sets it to true. There is a window in which a third thread can come in and grab the buck. Even if this didn't lead to crashes and deadlocks, it would still result in unfairness in the buck-passing algorithm. A better solution is to never set haveTheBuck to false when we're passing the buck; only set it to false when there is no further thread waiting for the buck.
* src/rpc/virnetclient.c: Only set haveTheBuck to false if no thread is waiting
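A reduced sketch of the race-free hand-off (names follow the commit text; the condition-variable details are elided):

    #include <stdbool.h>

    struct call;   /* queued caller, with its own condition variable */

    struct client {
        bool haveTheBuck;
        struct call *waitDispatch;   /* queue of waiting calls */
    };

    extern void wake_up(struct call *c);   /* virCondSignal in reality */

    static void
    pass_the_buck(struct client *c)
    {
        if (c->waitDispatch) {
            /* Keep haveTheBuck true across the hand-off so a third
             * thread cannot steal the buck between signal and wake. */
            wake_up(c->waitDispatch);
        } else {
            c->haveTheBuck = false;   /* nobody left to pass it to */
        }
    }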
-
Committed by Daniel P. Berrange
Commit fd066925 tried to fix a race condition introduced in:

    commit fa959500
    Author: Daniel P. Berrange <berrange@redhat.com>
    Date:   Fri Nov 11 15:28:41 2011 +0000

        Explicitly track whether the buck is held in remote client

Unfortunately there is a second race condition whereby the event loop can trigger due to incoming data to read. Revert this fix so that a complete fix for the problem can be cleanly applied.
* src/rpc/virnetclient.c: Revert fd066925
-