- 16 March 2017, 27 commits
-
-
Committed by Daniel P. Berrange
When providing explicit x509 cert/key paths in libvirtd.conf, the user must provide all three (CA certificate, server certificate and server key). If one or more are missing, this leads to obscure errors at runtime when negotiating the TLS session.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
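For reference, the three settings involved look roughly like this in /etc/libvirt/libvirtd.conf (the paths shown are just the customary defaults, not a requirement):

    ca_file   = "/etc/pki/CA/cacert.pem"
    cert_file = "/etc/pki/libvirt/servercert.pem"
    key_file  = "/etc/pki/libvirt/serverkey.pem"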
-
Committed by Daniel P. Berrange
Linux still defaults to a 1024 open file handle limit. This causes scalability problems for libvirtd / virtlockd / virtlogd on large hosts which might want > 1024 guests to be running. In fact, if each guest needs > 1 FD, we can't even get to 500 guests. This is not good enough when we see machines with hundreds of physical cores and TBs of RAM. In comparison to other memory requirements of libvirtd & related daemons, the resource usage associated with open file handles is essentially line noise. It is thus reasonable to increase the limits unconditionally for all installs.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
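As an aside, an administrator can already raise the limit for an installed daemon with a systemd drop-in; this is only an illustration of the mechanism, not necessarily the value or approach the commit itself ships:

    # /etc/systemd/system/libvirtd.service.d/limits.conf
    [Service]
    LimitNOFILE=8192

followed by 'systemctl daemon-reload' and a restart of libvirtd.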
-
Committed by Guido Günther
-
Committed by Michal Privoznik
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
There were a couple of reports on the list (e.g. [1]) that guests with huge amounts of RAM are unable to start because libvirt kills qemu during the initialization phase. The problem is that if the guest is configured to use hugepages, the kernel has to zero them all out before handing them over to the qemu process. For instance, 402GiB worth of 1GiB pages took around 105 seconds (~3.8GiB/s). Since we do not want to make the timeout for connecting to the monitor configurable, we have to teach libvirt to account for this fact. This commit implements the "1s per each 1GiB of RAM" approach suggested here [2].
1: https://www.redhat.com/archives/libvir-list/2017-March/msg00373.html
2: https://www.redhat.com/archives/libvir-list/2017-March/msg00405.html
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
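A minimal standalone sketch of the scaling idea, assuming a 30-second base timeout plus one extra second per GiB of guest RAM (the helper name and the base value are illustrative, not libvirt's actual code):

    #include <stdio.h>

    /* Illustrative helper: grow the monitor-connect timeout with guest RAM.
     * base_secs is the fixed timeout, mem_kib the guest memory in KiB. */
    static unsigned long long
    monitorTimeout(unsigned long long base_secs, unsigned long long mem_kib)
    {
        /* one extra second for each full GiB of guest memory */
        return base_secs + mem_kib / (1024ULL * 1024ULL);
    }

    int main(void)
    {
        /* a 402GiB guest: 30s base + 402s extra = 432s */
        printf("%llu\n", monitorTimeout(30, 402ULL * 1024 * 1024));
        return 0;
    }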
-
Committed by Michal Privoznik
While connecting to the qemu monitor, the first thing we do is wait for it to show up. However, we are doing it with some timeout to avoid indefinite waits (e.g. when qemu doesn't create the monitor socket at all). After beaa447a we are using an exponential back-off timeout, meaning that after the first connection attempt we wait 1ms, then 2ms, then 4ms and so on. This allows us to bring down the wait time for small domains where qemu initializes quickly. However, on the other end of this scale are domains with huge amounts of guest memory. Now imagine that we've gotten up to a wait time of 15 seconds. The next one is going to be 30 seconds, and the one after that a whole minute. Well, okay - with the current code we are not going to wait longer than 30 seconds in total, but this is going to change in the next commit. The exponential back-off is usable only for the first few iterations. Then it needs to be capped (one second was chosen as the limit) and switched to a constant wait time.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
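A rough standalone sketch of the capped back-off described above; the probe is stubbed out and the names are made up for illustration (the real code polls the qemu monitor socket):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stub standing in for "has the qemu monitor socket appeared yet?". */
    static bool monitorReady(void)
    {
        return false; /* pretend qemu never shows up, to exercise the timeout */
    }

    /* Wait for the monitor, doubling the sleep from 1ms up to a 1s cap,
     * until the total budget (in microseconds) is exhausted. */
    static bool waitForMonitor(unsigned long long timeout_us)
    {
        unsigned long long waited = 0;
        unsigned long long delay = 1000;         /* start at 1ms */
        const unsigned long long cap = 1000000;  /* never sleep longer than 1s */

        while (waited < timeout_us) {
            if (monitorReady())
                return true;
            usleep(delay);
            waited += delay;
            if (delay < cap) {
                delay *= 2;
                if (delay > cap)
                    delay = cap;
            }
        }
        return false;
    }

    int main(void)
    {
        printf("monitor up: %d\n", waitForMonitor(3000000)); /* 3s budget */
        return 0;
    }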
-
Committed by John Ferlan
Fix a "bug" in the storage pool test driver code which "assumed" testStoragePoolObjSetDefaults should fill in the configFile for both the Define/Create (persistent) and CreateXML (transient) pools, by just VIR_FREE()'ing it during CreateXML. Because the configFile was filled in, the pool wouldn't be freed during Destroy, which could cause issues for future patches that add tests to validate vHBA creation for a storage pool using the same name.
-
Committed by John Ferlan
Commit id 'bb74a7ff' added a fairly non-specific message when providing only the <parent wwnn='xxx'/> or <parent wwpn='xxx'/> instead of providing both wwnn and wwpn. This patch just modifies the message to be more specific about which one was missing.
-
Committed by John Ferlan
Rather than returning true/false and having the caller check whether the vHBA was actually created, let's do that check within the CreateVport function. That way the caller can faithfully assume success based on a non-NULL name and start the thread looking for the LUNs. Prior to this change it's possible that the vHBA wasn't really created (e.g. if the call to virVHBAGetHostByWWN returned NULL); we'd claim success, but in reality there'd be no vHBA for the pool. This also fixes a second, not yet seen, issue: if the nodedev was present, but the parent by name wasn't provided (perhaps parent by wwnn/wwpn or by fabric_name), then a failure would be returned. For this path it shouldn't be an error - we should just be happy that something else is managing the device and we don't have to create/delete it. The end result is that the createVport code can now just start the refresh thread once it gets a non-NULL name back.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Move the bulk of createVport and rename it to virNodeDeviceCreateVport. Remove deleteVport entirely and replace it with virNodeDeviceDeleteVport.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
The function is actually in virutil.c, but prototyped in virfile.h. This patch fixes that by renaming the function to virWaitForDevices, adding the prototype in virutil.h and libvirt_private.syms, and then changing the callers to use the new name.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Move virStoragePoolSourceAdapter from storage_conf.h and rename it to virStorageAdapter. Continue with code realignment for brevity and flow.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Rework the code to use the new FCHost-specific adapter structures. Also rework the parameters to only pass what's needed, and leave logic in the caller for the adapter type and the need to call the helpers.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Use the FCHost and SCSIHost adapter-specific typedefs.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Rework the helpers/APIs to use the FCHost and SCSIHost adapter types. Continue to realign the code for shorter lines.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Rework the helpers/APIs to use the FCHost and SCSIHost adapter types. Continue to realign the code for shorter lines.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Create typedef'd substructures and rework the typedef to utilize them.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Rather than have lots of ugly inline code, create helpers to try and make things more readable. While creating the helpers, realign the code as necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Rather than have lots of ugly inline code, create helpers to try and make things more readable. While creating the helpers, realign the code as necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Rename the APIs to remove the storage pool source pieces.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Move code from storage_conf into storage_adapter_conf. Pure code motion.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Rather than use virXPathString, pass along a virXPathNode and alter the parsing to use virXMLPropString.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Split out the code that munges through the storage pool adapter into helpers - it's about to be moved into its own source file. This is purely code motion at this point.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Commit id 'bb74a7ff' added some new fields to search for an fchost by parent wwnn/wwpn or parent_fabric_name, but neglected to validate that the data within the fields was valid at parse time. This could lead to eventual failure at run time, so rather than have the failure then, let's validate now.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1428209
Commit id 'bb74a7ff' neglected to check that both parent_wwnn and parent_wwpn are in the XML if one or the other is provided, similar to how the node device code checked (commit id '2b13361b'). If only one is provided, the "default" is to use a vHBA-capable adapter (see commit id '78be2e8b'), so the vHBA could start, but perhaps not on the expected adapter.
Signed-off-by: John Ferlan <jferlan@redhat.com>
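To illustrate the attributes being discussed, a vHBA-backed storage pool's <adapter> element looks roughly like this (all WWN values are placeholders):

    <adapter type='fc_host'
             parent_wwnn='20000000c9831b4b' parent_wwpn='10000000c9831b4b'
             wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>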
-
Committed by Daniel P. Berrange
RFC 6331 documents a number of serious security weaknesses in the SASL DIGEST-MD5 mechanism. As such, libvirtd should not be using it as a default mechanism. GSSAPI is the only other viable SASL mechanism that can provide secure session encryption, so enable that by default as the replacement.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
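The relevant knobs, roughly, are the SASL mechanism list consumed by libvirtd and the authentication setting in libvirtd.conf (values shown as an illustration of enabling GSSAPI, not necessarily the shipped defaults):

    # /etc/libvirt/libvirtd.conf
    auth_tcp = "sasl"

    # /etc/sasl2/libvirt.conf
    mech_list: gssapi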
-
Committed by Michal Privoznik
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 15 March 2017, 13 commits
-
-
Committed by Michal Privoznik
Some users might want to pass a blockdev or a chardev as the backend for an NVDIMM. In fact, this is expected to be the most commonly used configuration. Therefore libvirt should allow the device in the devices CGroup.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
Now that we have APIs for relabelling memdevs on hotplug, fill in the missing implementation in the qemu hotplug code. The qemuSecurity wrappers might look like overkill for now, because the qemu namespace code does not deal with nvdimms yet. Nor does our cgroup code. But hey, there's the cgroup_device_acl variable in qemu.conf. If users add their /dev/pmem* device there, the device is allowed in cgroups and created in the namespace, so they can successfully pass it through to the domain. It doesn't look like overkill after all, does it?
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
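For illustration, allowing a pmem device via qemu.conf might look like this excerpt (keep the distribution's default entries and append the device; /dev/pmem0 is just an example path):

    # /etc/libvirt/qemu.conf
    cgroup_device_acl = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm",
        "/dev/pmem0"
    ]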
-
Committed by Michal Privoznik
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
These APIs will be used whenever we are hot(un-)plugging a memdev.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
When a domain is being started up, we ought to relabel the host side of the NVDIMM so qemu has access to it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
When a domain is being started up, we ought to relabel the host side of the NVDIMM so qemu has access to it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
For NVDIMM devices it is optionally possible to specify the size of the internal storage for namespaces. Namespaces are a feature that allows users to partition the NVDIMM for different uses.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
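In the domain XML this surfaces as a <label> element under <target>; a sketch with illustrative sizes:

    <memory model='nvdimm'>
      <source>
        <path>/tmp/nvdimm</path>
      </source>
      <target>
        <size unit='KiB'>523264</size>
        <node>0</node>
        <label>
          <size unit='KiB'>128</size>
        </label>
      </target>
    </memory>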
-
Committed by Michal Privoznik
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
Now that NVDIMM has found its way into libvirt, users might want to fine-tune some settings for each module separately. One such setting is 'share=on|off' for the memory-backend-file object. This setting - just like its name suggests - enables sharing the nvdimm module with other applications. Under the hood it controls whether qemu mmap()s the file as MAP_PRIVATE or MAP_SHARED. Yet again, we have such a config knob in the domain XML, but it's just an attribute of the numa <cell/>. This does not give fine enough tuning on a per-memdevice basis, so we need to have the attribute for each device too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
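Per current libvirt documentation the per-device knob is an access attribute on the <memory> element (mirroring the numa memAccess semantics); a minimal illustrative fragment:

    <memory model='nvdimm' access='shared'>
      <source>
        <path>/tmp/nvdimm</path>
      </source>
      <target>
        <size unit='KiB'>523264</size>
        <node>0</node>
      </target>
    </memory>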
-
Committed by Michal Privoznik
So, the majority of the code is just ready as-is. Well, with one slight change: differentiate between dimm and nvdimm in places like device alias generation, generating the command line, and so on. Speaking of the command line, we also need to append 'nvdimm=on' to the '-machine' argument so that the nvdimm feature is advertised in the ACPI tables properly.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
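For orientation, the qemu arguments involved look roughly like this; IDs, paths and sizes are placeholders and the exact string libvirt generates may differ (memory hotplug also needs '-m' with slots/maxmem):

    -machine pc,nvdimm=on \
    -object memory-backend-file,id=memnvdimm0,mem-path=/tmp/nvdimm,share=on,size=512M \
    -device nvdimm,id=nvdimm0,memdev=memnvdimm0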
-
Committed by Michal Privoznik
Introduce a qemu capability for '-device nvdimm'.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
NVDIMM is a new type of memory introduced into QEMU 2.6. The idea is that we have a non-volatile memory module that keeps the data persistent across domain reboots. At the domain XML level, we already have some representation of 'dimm' modules. Long story short, NVDIMM will utilize the existing <memory/> element that lives under <devices/> by adding a new value 'nvdimm' to the existing @model attribute and introducing a new <path/> element for <source/>, while reusing the other fields. The resulting XML would appear as:

    <memory model='nvdimm'>
      <source>
        <path>/tmp/nvdimm</path>
      </source>
      <target>
        <size unit='KiB'>523264</size>
        <node>0</node>
      </target>
      <address type='dimm' slot='0'/>
    </memory>

So far, this is just an XML parser/formatter extension. The QEMU driver implementation is in the next commit. For more info on NVDIMM visit the following web page: http://pmem.io/
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
Frankly, this function is one big mess. A lot of arguments, complicated behaviour. It's really surprising that the arguments were in random order (input and output arguments were mixed together), the documentation was outdated, and the description of the return values was bogus.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-