Commit 8bc03283 authored by XuanDai

Merge branch 'dev' into 'master'

add ceph code

See merge request !1

Maintainer
----------
Sage Weil <sage@redhat.com>
Component Technical Leads
-------------------------
For a full list of CTLs and maintainers visit: http://ceph.com/team/
Contributors
------------
For a complete contributor list:
git shortlog -sn
For more friendly contributor stats, see:
http://metrics.ceph.com
For the general process of submitting patches to Ceph, read
`Submitting Patches`_.
For documentation patches, the following guide will help you get started:
`Documenting Ceph`_.
Performance enhancements must come with test data and detailed
explanations.
Code cleanup is appreciated along with a patch that fixes a bug or
implements a feature. Except on rare occasions, code cleanup that only
involves coding style or whitespace modifications is discouraged,
primarily because it causes problems when rebasing and backporting.
.. _Submitting Patches: SubmittingPatches.rst
.. _Documenting Ceph: doc/start/documenting-ceph.rst
Format-Specification: http://anonscm.debian.org/viewvc/dep/web/deps/dep5/copyright-format.xml?revision=279&view=markup
Name: ceph
Maintainer: Sage Weil <sage@newdream.net>
Source: http://ceph.com/
Files: *
Copyright: (c) 2004-2010 by Sage Weil <sage@newdream.net>
License: LGPL-2.1 or LGPL-3 (see COPYING-LGPL2.1 and COPYING-LGPL3)
Files: cmake/modules/FindLTTngUST.cmake
Copyright:
Copyright 2016 Kitware, Inc.
Copyright 2016 Philippe Proulx <pproulx@efficios.com>
License: BSD 3-clause
Files: doc/*
Copyright: (c) 2010-2012 New Dream Network and contributors
License: Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)
Files: bin/git-archive-all.sh
License: GPL3
Files: src/mount/canonicalize.c
Copyright: Copyright (C) 1993 Rick Sladkey <jrs@world.std.com>
License: LGPL-2 or later
Files: src/os/btrfs_ioctl.h
Copyright: Copyright (C) 2007 Oracle. All rights reserved.
License: GPL2 (see COPYING-GPL2)
Files: src/include/ceph_hash.cc
Copyright: None
License: Public domain
Files: src/common/bloom_filter.hpp
Copyright: Copyright (C) 2000 Arash Partow <arash@partow.net>
License: Boost Software License, Version 1.0
Files: src/common/crc32c_intel*
Copyright:
Copyright 2012-2013 Intel Corporation All Rights Reserved.
License: BSD 3-clause
Files: src/common/deleter.h
Copyright:
Copyright (C) 2014 Cloudius Systems, Ltd.
License:
Apache-2.0
Files: src/common/sctp_crc32.c
Copyright:
Copyright (c) 2001-2007, by Cisco Systems, Inc. All rights reserved.
Copyright (c) 2004-2006 Intel Corporation - All Rights Reserved
License:
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
a) Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
b) Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the distribution.
c) Neither the name of Cisco Systems, Inc. nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
Files: src/common/sstring.hh
Copyright:
Copyright 2014 Cloudius Systems
License:
Apache-2.0
Files: src/include/cpp-btree
Copyright:
Copyright 2013 Google Inc. All Rights Reserved.
License:
Apache-2.0
Files: src/json_spirit
Copyright:
Copyright John W. Wilkinson 2007 - 2011
License:
The MIT License
Copyright (c) 2007 - 2010 John W. Wilkinson
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
Files: src/test/common/Throttle.cc src/test/filestore/chain_xattr.cc
Copyright: Copyright (C) 2013 Cloudwatt <libre.licensing@cloudwatt.com>
License: LGPL-2.1 or later
Files: src/osd/ErasureCodePluginJerasure/*.{c,h}
Copyright: Copyright (c) 2011, James S. Plank <plank@cs.utk.edu>
License:
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
- Neither the name of the University of Tennessee nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Packaging:
Copyright (C) 2004-2009 by Sage Weil <sage@newdream.net>
Copyright (C) 2010 Canonical, Ltd.
Licensed under LGPL-2.1 or LGPL-3.0
Files: src/test/perf_local.cc
Copyright:
(c) 2011-2014 Stanford University
(c) 2011 Facebook
License:
The MIT License
File: qa/workunits/erasure-code/jquery.js
Copyright 2012 jQuery Foundation and other contributors
Released under the MIT license
http://jquery.org/license
Files: qa/workunits/erasure-code/jquery.{flot.categories,flot}.js
Copyright (c) 2007-2014 IOLA and Ole Laursen.
Licensed under the MIT license.
Files: src/include/timegm.h
Copyright (C) Howard Hinnant
Copyright (C) 2010-2011 Vicente J. Botet Escriba
License: Boost Software License, Version 1.0
Files: src/pybind/mgr/diskprediction_local/models/*
Copyright: None
License: Public domain
Files: src/ceph-volume/plugin/zfs/*
Copyright: 2018, Willem Jan Withagen
License: BSD 3-clause
Files: src/include/function2.hpp
Copyright: 2015-2018, Denis Blank
License: Boost Software License, Version 1.0
Files: src/include/expected.hpp
Copyright: 2017, Simon Brand
License: CC0
Files: src/include/uses_allocator.h
Copyright: 2016, Pablo Halpern <phalpern@halpernwightsoftware.com>
License: Boost Software License, Version 1.0
Files: src/common/async/bind_allocator.h
Copyright: 2020 Red Hat <contact@redhat.com>
2003-2019 Christopher M. Kohlhoff <chris@kohlhoff.com>
License: Boost Software License, Version 1.0
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
Ceph Coding style
-----------------
Coding style is most important for new code and (to a lesser extent)
revised code. It is not worth the churn to simply reformat old code.
C code
------
For C code, we conform to the Linux kernel coding standards:
https://www.kernel.org/doc/Documentation/process/coding-style.rst
C++ code
--------
For C++ code, things are a bit more complex. As a baseline, we use Google's
coding guide:
https://google.github.io/styleguide/cppguide.html
As an addendum to the above, we add the following guidelines, organized
by section.
* Naming > Type Names:
Google uses CamelCaps for all type names. We use two naming schemes:
- for naked structs (simple data containers), lower case with _t.
Yes, _t also means typedef. It's perhaps not ideal.
struct my_type_t {
int a = 0, b = 0;
void encode(...) ...
...
};
- for full-blown classes, CamelCaps, private: section, accessors,
probably not copyable, etc.
* Naming > Variable Names:
Google uses _ suffix for class members. That's ugly. We'll use
an m_ prefix, like so, or none at all.
class Foo {
public:
int get_foo() const { return m_foo; }
void set_foo(int foo) { m_foo = foo; }
private:
int m_foo;
};
* Naming > Constant Names:
Google uses kSomeThing for constants. We prefer SOME_THING.
* Naming > Function Names:
Google uses CamelCaps. We use function_names_with_underscores().
Accessors are the same, {get,set}_field().
* Naming > Enumerator Names:
Name them like constants, as above (SOME_THING).
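For illustration, a tiny sketch of the constant and enumerator conventions above (the identifiers are invented for this example, not real Ceph names):

```cpp
#include <cassert>

// Constants use SOME_THING; enumerators are named the same way.
constexpr int MAX_JOURNAL_SIZE = 1 << 20;

enum scrub_state_t {
  SCRUB_IDLE,
  SCRUB_ACTIVE,
  SCRUB_ERROR,
};
```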
* Comments > File Comments:
Don't sweat it, unless the license varies from that of the project
(LGPL2.1 or LGPL3.0) or the code origin isn't reflected by the git history.
* Formatting > Tabs:
Indent width is two spaces. When runs of 8 spaces can be compressed
to a single tab character, do so. The standard Emacs/Vim settings
header is:
// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
// vim: ts=8 sw=2 smarttab ft=cpp
* Formatting > Conditionals:
- No spaces inside conditionals please, e.g.
if (foo) { // okay
if ( foo ) { // no
- Always use newline following if, and use braces:
if (foo) {
bar; // like this, even for a one-liner
}
if (foo)
bar; // no, usually harder to parse visually
if (foo) bar; // no
if (foo) { bar; } // definitely no
* Header Files > The `#define` Guard:
`#pragma once` is allowed for simplicity; the (theoretical) portability
cost is acceptable since `#pragma once` is widely supported and is known
to work on GCC and Clang.
The following guidelines have not been followed in the legacy code,
but are worth mentioning and should be followed strictly for new code:
* Header Files > Function Parameter Ordering:
Inputs, then outputs.
* Classes > Explicit Constructors:
You should normally mark constructors explicit to avoid getting silent
type conversions.
* Classes > Copy Constructors:
- Use defaults for basic struct-style data objects.
- Most other classes should DISALLOW_COPY_AND_ASSIGN.
- In rare cases we can define a proper copy constructor and operator=.
* Other C++ Features > Reference Arguments:
Only use const references. Use pointers for output arguments.
* Other C++ Features > Avoid Default Arguments:
They obscure the interface.
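A minimal sketch combining several of the guidelines above (explicit constructor, const-reference input, pointer output argument, m_ member prefix); the class and its names are invented for illustration, not real Ceph types:

```cpp
#include <cassert>
#include <string>
#include <vector>

class SnapFilter {
 public:
  // explicit avoids silent std::string -> SnapFilter conversions
  explicit SnapFilter(const std::string& prefix) : m_prefix(prefix) {}

  // input by const reference, output via pointer
  void filter(const std::vector<std::string>& in,
              std::vector<std::string>* out) const {
    for (const auto& name : in) {
      if (name.compare(0, m_prefix.size(), m_prefix) == 0)
        out->push_back(name);
    }
  }

 private:
  std::string m_prefix;
};
```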
Python code
-----------
For new Python code, PEP-8 should be observed:
https://www.python.org/dev/peps/pep-0008/
Existing code can be refactored to adhere to PEP-8, and cleanups are welcome.
JavaScript / TypeScript
-----------------------
For Angular code, we follow the official Angular style guide:
https://angular.io/guide/styleguide
To check whether your code is conformant with the style guide, we use a
combination of TSLint, Codelyzer and Prettier:
https://palantir.github.io/tslint/
http://codelyzer.com/
https://prettier.io/
PROJECT_NAME = Ceph
OUTPUT_DIRECTORY = build-doc/doxygen
STRIP_FROM_PATH = src/
STRIP_FROM_INC_PATH = src/include
BUILTIN_STL_SUPPORT = YES
SYMBOL_CACHE_SIZE = 2
WARN_IF_UNDOCUMENTED = NO
INPUT = src
RECURSIVE = YES
EXCLUDE = src/googletest \
src/test/virtualenv \
src/out \
src/tracing \
src/civetweb
VERBATIM_HEADERS = NO
GENERATE_HTML = NO
GENERATE_LATEX = NO
GENERATE_XML = YES
XML_PROGRAMLISTING = NO
HAVE_DOT = YES
DOT_TRANSPARENT = YES
JAVADOC_AUTOBRIEF = YES
>=16.0.0
--------
* The allowable options for some "radosgw-admin" commands have been changed.
* "mdlog-list", "datalog-list", and "sync-error-list" no longer accept
start and end dates, but do accept a single optional start marker.
* "mdlog-trim", "datalog-trim", "sync-error-trim" only accept a
single marker giving the end of the trimmed range.
* Similarly the date ranges and marker ranges have been removed on
the RESTful DATALog and MDLog list and trim operations.
>=15.0.0
--------
* The ``ceph df`` command now lists the number of pgs in each pool.
* Monitors now have a config option ``mon_allow_pool_size_one``, which is disabled
by default. If it is enabled, users have to pass the
``--yes-i-really-mean-it`` flag to ``osd pool set size 1`` to confirm that they
really intend to configure a pool with size 1.
* librbd now inherits the stripe unit and count from its parent image upon creation.
This can be overridden by specifying different stripe settings during clone creation.
* The balancer is now on by default in upmap mode. Since upmap mode requires
``require_min_compat_client`` luminous, new clusters will only support luminous
and newer clients by default. Existing clusters can enable upmap support by running
``ceph osd set-require-min-compat-client luminous``. It is still possible to turn
the balancer off using the ``ceph balancer off`` command. In earlier versions,
the balancer was included in the ``always_on_modules`` list, but needed to be
turned on explicitly using the ``ceph balancer on`` command.
* MGR: the "cloud" mode of the diskprediction module is no longer supported
and the ``ceph-mgr-diskprediction-cloud`` manager module has been removed. This
is because the external cloud service run by ProphetStor is no longer accessible
and there is no immediate replacement for it at this time. The "local" prediction
mode will continue to be supported.
* Cephadm: There were a lot of small usability improvements and bug fixes:
* Grafana when deployed by Cephadm now binds to all network interfaces.
* ``cephadm check-host`` now prints all detected problems at once.
* Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
when generating an SSL certificate for Grafana.
* The Alertmanager is now correctly pointed to the Ceph Dashboard.
* ``cephadm adopt`` now supports adopting an Alertmanager.
* ``ceph orch ps`` now supports filtering by service name.
* ``ceph orch host ls`` now marks hosts as offline if they are not
accessible.
* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
a service id of mynfs, that will use the RADOS pool nfs-ganesha and namespace
nfs-ns::
ceph orch apply nfs mynfs nfs-ganesha nfs-ns
* Cephadm: ``ceph orch ls --export`` now returns all service specifications in
yaml representation that is consumable by ``ceph orch apply``. In addition,
the commands ``orch ps`` and ``orch ls`` now support ``--format yaml`` and
``--format json-pretty``.
* CephFS: Automatic static subtree partitioning policies may now be configured
using the new distributed and random ephemeral pinning extended attributes on
directories. See the documentation for more information:
https://docs.ceph.com/docs/master/cephfs/multimds/
* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints a preview of
the OSD specification before deploying OSDs. This makes it possible to
verify that the specification is correct, before applying it.
* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
``radosgw-admin orphans find``, ``radosgw-admin orphans finish``, and
``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
not been actively maintained and they store intermediate results on
the cluster, which could fill a nearly-full cluster. They have been
replaced by a tool, currently considered experimental,
``rgw-orphan-list``.
* RBD: The name of the rbd pool object that is used to store the
rbd trash purge schedule has changed from "rbd_trash_trash_purge_schedule"
to "rbd_trash_purge_schedule". Users that have already started using the
``rbd trash purge schedule`` functionality and have per-pool or per-namespace
schedules configured should copy the "rbd_trash_trash_purge_schedule"
object to "rbd_trash_purge_schedule" before the upgrade and remove
"rbd_trash_trash_purge_schedule" using the following commands in every RBD
pool and namespace where a trash purge schedule was previously
configured::
rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule
or use any other convenient way to restore the schedule after the
upgrade.
* librbd: The shared, read-only parent cache has been moved to a separate librbd
plugin. If the parent cache was previously in-use, you must also instruct
librbd to load the plugin by adding the following to your configuration::
rbd_plugins = parent_cache
* Monitors now have a config option ``mon_osd_warn_num_repaired``, 10 by default.
If any OSD has repaired more than this many I/O errors in stored data, an
``OSD_TOO_MANY_REPAIRS`` health warning is generated.
* Introduce commands that manipulate required client features of a file system::
ceph fs required_client_features <fs name> add <feature>
ceph fs required_client_features <fs name> rm <feature>
ceph fs feature ls
* OSD: A new configuration option ``osd_compact_on_start`` has been added which triggers
an OSD compaction on start. Setting this option to ``true`` and restarting an OSD
will result in an offline compaction of the OSD prior to booting.
* OSD: the option ``bdev_nvme_retry_count`` has been removed: SPDK v20.07
provides no easy access to bdev_nvme options, and the option was hardly
used anyway.
* Now, when the noscrub and/or nodeep-scrub flags are set globally or per pool,
scheduled scrubs of the disabled type are aborted. All user-initiated
scrubs are NOT interrupted.
* The Alpine build-related script, documentation, and test have been removed, since
the most up-to-date APKBUILD script for Ceph is already included in Alpine Linux's
aports repository.
* fs: Names of new FSs, volumes, subvolumes and subvolume groups can only
contain alphanumeric and ``-``, ``_`` and ``.`` characters. Some commands
or CephX credentials may not work with old FSs with non-conformant names.
* `blacklist` has been replaced with `blocklist` throughout. The following commands have changed:
- ``ceph osd blacklist ...`` are now ``ceph osd blocklist ...``
- ``ceph <tell|daemon> osd.<NNN> dump_blacklist`` is now ``ceph <tell|daemon> osd.<NNN> dump_blocklist``
* The following config options have changed:
- ``mon osd blacklist default expire`` is now ``mon osd blocklist default expire``
- ``mon mds blacklist interval`` is now ``mon mds blocklist interval``
- ``mon mgr blacklist interval`` is now ``mon mgr blocklist interval``
- ``rbd blacklist on break lock`` is now ``rbd blocklist on break lock``
- ``rbd blacklist expire seconds`` is now ``rbd blocklist expire seconds``
- ``mds session blacklist on timeout`` is now ``mds session blocklist on timeout``
- ``mds session blacklist on evict`` is now ``mds session blocklist on evict``
* The following librados API calls have changed:
- ``rados_blacklist_add`` is now ``rados_blocklist_add``; the former will issue a deprecation warning and be removed in a future release.
- ``rados.blacklist_add`` is now ``rados.blocklist_add`` in the C++ API.
* The JSON output for the following commands now shows ``blocklist`` instead of ``blacklist``:
- ``ceph osd dump``
- ``ceph <tell|daemon> osd.<N> dump_blocklist``
* caps: MON and MDS caps can now be used to restrict a client's ability to view
and operate on specific Ceph file systems. The FS can be specified using
``fsname`` in caps. This also affects the subcommand ``fs authorize``: the caps
produced by it will be specific to the FS name passed in its arguments.
* fs: "fs authorize" now sets MON cap to "allow <perm> fsname=<fsname>"
instead of setting it to "allow r" all the time.
Last updated: 2017-04-08
The FreeBSD build will build most of the tools in Ceph.
Note that the (kernel) RBD dependent items will not work.
I started looking into Ceph, because the HAST solution with CARP and
ggate did not really do what I was looking for. But I'm aiming for
running a Ceph storage cluster on storage nodes that are running ZFS.
In the end the cluster would be running bhyve on RBD disks that are stored in
Ceph.
Progress from last report:
==========================
Most important change:
- A port is submitted: net/ceph-devel.
Other improvements:
* A new ceph-devel update will be submitted in April
- Ceph-fuse works, allowing one to mount a CephFS on a FreeBSD system and do
some work on it.
- Ceph-disk prepare and activate work for FileStore on ZFS, allowing
easy creation of OSDs.
- RBD is actually buildable and can be used to manage RADOS BLOCK
DEVICEs.
- Most of the awkward dependencies on Linux-isms are deleted; only
/bin/bash is there to stay.
Getting the FreeBSD work on Ceph:
=================================
pkg install net/ceph-devel
Or:
cd "place to work on this"
git clone https://github.com/wjwithagen/ceph.git
cd ceph
git checkout wip.FreeBSD
Building Ceph
=============
- Go and start building
./do_freebsd.sh
Parts not (yet) included:
=========================
- KRBD
Kernel Rados Block Devices is implemented in the Linux kernel.
Perhaps ggated could be used as a template, since it does some of
the same, other than just between 2 disks. And it has a userspace
counterpart.
- BlueStore.
FreeBSD and Linux have different AIO APIs, and these need to be
reconciled. In addition, there is discussion in FreeBSD about
aio_cancel not working for all device types.
- CephFS as native filesystem
(Ceph-fuse does work.)
Cython tries to access an internal field in dirent, which does not
compile.
Build Prerequisites
===================
Compiling and building Ceph is tested on 12-CURRENT, but 11-RELEASE
is expected to work as well, with Clang at 3.8.0.
The build uses the Clang toolset that is available; 3.7 is no longer tested,
but was working with 11-CURRENT.
Clang 3.4 (on 10.2-STABLE) does not have all the capabilities required to
compile everything.
The following setup will get things running for FreeBSD:
All of this requires root privileges.
- Install bash and link it in /bin
sudo pkg install bash
sudo ln -s /usr/local/bin/bash /bin/bash
Getting the FreeBSD work on Ceph:
=================================
- cd "place to work on this"
git clone https://github.com/wjwithagen/ceph.git
cd ceph
git checkout wip.FreeBSD.201702
Building Ceph
=============
- Go and start building
./do_freebsd.sh
Parts not (yet) included:
=========================
- KRBD
Kernel Rados Block Devices is implemented in the Linux kernel.
It seems that there used to be a userspace implementation first.
And perhaps ggated could be used as a template since it does some of
the same, other than just between 2 disks. And it has a userspace
counterpart.
- BlueStore.
FreeBSD and Linux have different AIO APIs, and these need to be
reconciled. In addition, there is discussion in FreeBSD about
aio_cancel not working for all device types.
- CephFS
Cython tries to access an internal field in dirent, which does not
compile.
Tests that verify the correct working of the above are also excluded
from the test set.
Tests not (yet) included:
=========================
- None, although some tests can fail when run in parallel if there is
not enough swap; tests then start to fail in strange ways.
Tasks to do:
============
- Build an automated test platform that will build ceph/master on
FreeBSD and report the results back to the Ceph developers. This will
increase the maintainability of the FreeBSD side of things.
Developers are then signalled when they use Linux-isms that will not
compile or run on FreeBSD. Ceph has several projects for this: Jenkins,
teuthology, pulpito, ...
But even just a while { compile } loop and report the build data on a
static webpage would do for starters.
- Run integration tests to see if the FreeBSD daemons will work with a
Linux Ceph platform.
- Compile and test the user space RBD (Rados Block Device).
- Investigate and see if an in-kernel RBD device could be developed a la
'ggate'.
- Investigate the keystore, which could be kernel-embedded on Linux and
currently prevents building CephFS and some other parts.
- Scheduler information is not used at the moment, because the schedulers work
rather differently. But at a certain point in time, this would need some
attention:
in: ./src/common/Thread.cc
- Improve the FreeBSD /etc/rc.d initscripts in the Ceph stack. Both
for testing, but mainly for running Ceph on production machines.
Work on ceph-disk and ceph-deploy to make them more FreeBSD- and ZFS-
compatible.
- Build test-cluster and start running some of the teuthology integration
tests on these.
Teuthology wants to build its own libvirt, and that does not quite work
with all the packages FreeBSD already has in place. Lots of minute
details to figure out.
- Design a virtual disk implementation that can be used with bhyve and
attached to an RBD image.
The AIX build will only build the librados library.
Build Prerequisites
===================
The following AIX packages are required for development and compilation; they have been installed via the AIX-rpm (rpm) packages:
AIX-rpm
tcl
tk
expect
curl
readline
libpng
mpfr
m4
autoconf
gettext
less
perl
gdbm
pcre
rsync
zlib
gcc-cpp
libffi
pkg-config
libiconv
glib2
info
libidn
openldap
python-tools
bzip2
python
sed
grep
libtool
nspr
nss-util
sqlite
nss-softokn
nss-softokn-freebl
libstdc++
gmp
coreutils
nss
nss-tools
nss-sysinit
nspr-devel
nss-util-devel
nss-softokn-devel
nss-softokn-freebl-devel
nss-devel
make
libsigsegv
automake
libmpc
libgcc
gcc
libstdc++-devel
gcc-c++
adns
tcsh
bash
getopt
db4
expat
tcl
freetype2
fontconfig
libXrender
libXft
tk
python-libs
tkinter
gdb
git
Download and Compile Boost 1.59 (or higher)
Building Ceph
=============
export CXX="c++ -maix64"
export CFLAGS="-g -maix64"
export OBJECT_MODE=64
export LDFLAGS="-L/usr/lib64 -L/opt/freeware/lib64 -L<pathtoboost>/boost_1_59_0/stage/lib -Wl,-brtl -Wl,-bbigtoc"
export CXXFLAGS="-I/opt/freeware/include -I<pathtoboost>/boost_1_59_0"
./autogen.sh
Then manually modify config.guess::
- *:AIX:*:[456])
+ *:AIX:*:[4567])
./configure --disable-server --without-fuse --without-tcmalloc --without-libatomic-ops --without-libaio --without-libxfs
cd src
gmake librados.la
The Solaris build will only build the librados library.
Build Prerequisites
===================
The following Solaris packages are required for compilation:
git
autoconf
libtool
automake
gcc-c++-48
gnu-make
(use the "pkg install <packagename>" command to install, as root)
Download and Compile Boost 1.59 (or higher)
Building Ceph
=============
export LDFLAGS="-m64 -L<pathtoboost>/stage/lib -L/usr/lib/mps/64"
export CPPFLAGS="-m64 -I<pathtoboost>"
export CXXFLAGS="-m64"
export CFLAGS="-m64"
./autogen.sh
./configure --disable-server --without-fuse --without-tcmalloc --without-libatomic-ops --without-libaio --without-libxfs
cd src
gmake librados.la
About
-----
Ceph Windows support is currently a work in progress. For now, the main focus
is the client side, allowing Windows hosts to consume rados, rbd and cephfs
resources.
Building
--------
At the moment, mingw gcc is the only supported compiler for building ceph
components for Windows. Support for msvc and clang will be added soon.
`win32_build.sh`_ can be used for cross compiling Ceph and its dependencies.
It may be called from a Linux environment, including Windows Subsystem for
Linux. MSYS2 and Cygwin may also work, but those weren't tested.
.. _win32_build.sh: win32_build.sh
The script accepts the following flags:
============ =============================== ===============================
Flag Description Default value
============ =============================== ===============================
CEPH_DIR The Ceph source code directory. The same as the script.
BUILD_DIR The directory where the $CEPH_DIR/build
generated artifacts will be
placed.
DEPS_DIR The directory where the Ceph $CEPH_DIR/build.deps
dependencies will be built.
NUM_WORKERS The number of workers to use The number of vcpus
when building Ceph. available
CLEAN_BUILD Clean the build directory.
SKIP_BUILD Run cmake without actually
performing the build.
============ =============================== ===============================
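As a sketch of how these settings might be supplied (the directory paths and worker count below are invented for illustration; passing the flags as environment variables is an assumption based on the table above):

```shell
# Hypothetical settings for a cross-compile run; the real call would then
# execute win32_build.sh. Only the command line is assembled here so this
# sketch stays runnable anywhere.
CEPH_DIR="$HOME/ceph"
BUILD_DIR="$CEPH_DIR/build"
NUM_WORKERS=8
cmd="CEPH_DIR=$CEPH_DIR BUILD_DIR=$BUILD_DIR NUM_WORKERS=$NUM_WORKERS ./win32_build.sh"
echo "$cmd"
```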
Current status
--------------
The rados and rbd binaries and libs compile successfully and can be used on
Windows, successfully connecting to the cluster and consuming pools.
The libraries have to be built statically at the moment. The reason is that
there are a few circular library dependencies or unspecified dependencies,
which isn't supported when building DLLs. This mostly affects ``cls`` libraries.
A significant number of tests from the ``tests`` directory have been ported,
providing adequate coverage.
Submitting Patches to Ceph - Backports
======================================
Most likely you're reading this because you intend to submit a GitHub pull
request ("PR") targeting one of the stable branches ("nautilus", etc.) at
https://github.com/ceph/ceph.
Before you open that PR, please read this entire document or, at the very least,
the following two sections: `General principles`_ and `Cherry-picking rules`_.
.. contents::
:depth: 3
General principles
------------------
To help the people who will review your backport, please state either in the
backport PR, or in the backport tracker issue, or in the master tracker issue:
1. what bug is fixed
2. why this fix is the minimal way to do it
3. why this needs to be fixed in <release>
The above should be followed especially in cases when the backport could be seen
as introducing, into a stable branch, code that is not related to a particular
bug or issue.
Rationale: every modification of a stable branch carries a certain risk of
introducing a regression. To minimize this risk, backports should be as
straightforward and narrowly-targeted as possible. As a stable release series
ages, the importance of following these general principles rises.
Cherry-picking rules
--------------------
The following rules, which have been codified from "best practices" developed
over years of backporting, apply to the actual backport implementation:
* all fixes should land in master first
* commits to stable branches should be cherry-picked from master
* before starting to cherry-pick a set of commits from master, grep the master git history for the SHA1 of each master commit (using ``git log --grep``) to check for follow-up fixes. Include any follow-up fixes found in the set of commits to be cherry-picked.
* cherry-picks must be done using ``git cherry-pick -x``
* if a commit could not be cherry-picked from master, the commit message must explain why that was not possible
* the commit message generated by ``git cherry-pick -x`` must not be modified, except to add a "Conflicts" section below the "cherry picked from commit ..." line added by git
* the "Conflicts" section must mention all files where changes had to be made manually (not just conflicts flagged by git)
* the "Conflicts" section should also describe the manual changes that were made
* if a change is to be backported to multiple stable branches, a tracker issue is needed, so the backports can be tracked (if a change is only to be backported to the most recent stable branch, a tracker issue is not strictly required)
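The cherry-pick mechanics above can be tried out in a throwaway repository; the branch and commit names below are invented for illustration, not Ceph's real branches:

```shell
# Demonstrates `git cherry-pick -x`: the resulting commit message records
# where the commit was picked from, as required by the rules above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "base"
git branch stable
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "fix: illustrative bug"
fix_sha=$(git rev-parse HEAD)
git checkout -q stable
# -x appends the "(cherry picked from commit ...)" provenance line:
git -c user.email=demo@example.com -c user.name=demo \
    cherry-pick -x --allow-empty "$fix_sha"
git log -1 --format=%B
```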
For more information on tracker issues, see `Tracker workflow`_.
For more information on conflict resolution and writing the "Conflicts" section,
see `Conflict resolution`_.
Adhering to these rules will make your backport easier for reviewers to
understand. Not adhering to these rules creates additional work for reviewers
and may cause your backport PR to be rejected.
Notes on the cherry-picking rules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
What does "all fixes should land in master first" mean? What if I just need to
fix the issue in <release>?
As the person fixing the issue, you are required to first check whether the
issue exists in master. If it does, then the proper course of action is to
create a master tracker (see `Tracker workflow`_) and fix the issue in master,
first, and only then cherry-pick the fix to the stable branches that have the
issue.
If the issue exists in the stable branch, but not in master, there are several
possibilities:
1. it's a regression introduced into the stable branch by a bad backport
2. the issue was fixed in master by some massive refactoring that cannot be backported
3. the issue was already fixed in master by a cherry-pickable commit
In cases 1 and 2, it's permissible to fix the issue directly in the most recent
stable branch, subject to the rule "if a commit could not be cherry-picked from
master, the commit message must explain why that was not possible". Once the
fix has landed in the most recent stable branch, it can be cherry-picked to
older stable branches from there.
In case 3, the issue should be handled like any other backport - read on.
Tracker workflow
----------------
Any change that is to be backported to multiple stable branches should have
an associated tracker issue at https://tracker.ceph.com.
For fixes already merged to master, this may have already been done - see the
``Fixes:`` line in the master PR. If the master PR has already been merged and
there is no associated master tracker issue, you can create a master tracker
issue and fill in the fields as described below.
This master tracker issue should be in the "Bug" or "Feature"
trackers of the relevant subproject under the "Ceph" parent project (or
in the "Ceph" project itself if none of the subprojects are a good fit).
The stable branches to which the master changes are to be cherry-picked should
be listed in the "Backport" field. For example::
Backport: mimic, nautilus
Once the PR targeting master is open, add the PR number assigned by GitHub to
the tracker issue. For example, if the PR number is 99999::
Pull request ID: 99999
Once the master PR has been merged, after checking that the change really needs
to be backported and that the Backport field has been populated, change the
master tracker issue's ``Status`` field to "Pending Backport"::
Status: Pending Backport
If you do not have sufficient permissions to modify any field of the tracker
issue, just add a comment describing what changes you would like to make.
Someone with permissions will make the necessary modifications on your behalf.
For straightforward backports, that's all that you (as the developer of the fix)
need to do. Volunteers from the `Stable Releases and Backports team`_ will
proceed to create Backport issues to track the necessary backports and stage the
backports by opening GitHub PRs with the cherry-picks. If you don't want to
wait, and provided you have sufficient permissions at https://tracker.ceph.com,
you can `create backport tracker issues`_ and `stage backports`_ yourself. In
that case, read on.
.. _`create backport tracker issues`:
.. _`backport tracker issue`:
Creating Backport tracker issues
--------------------------------
To track backporting efforts, "backport tracker issues" can be created from
a parent "master tracker issue". The master tracker issue is described in the
previous section, `Tracker workflow`_. This section focuses on the backport tracker
issue.
Once the entire `Tracker workflow`_ has been completed for the master issue,
issues can be created in the Backport tracker for tracking the backporting work.
Under ordinary circumstances, the developer who merges the master PR will flag
the master tracker issue for backport by changing the Status to "Pending
Backport", and volunteers from the `Stable Releases and Backports team`_
periodically create backport tracker issues by running the
``backport-create-issue`` script. They also do the actual backporting. But that
does take time and you may not want to wait.
You might be tempted to forge ahead and create the backport issues yourself.
Please don't do that - it is difficult (bordering on impossible) to get all the
fields correct when creating backport issues manually, and why even try when
there is a script that gets it right every time? Setting up the script requires
a small up-front time investment. Once that is done, creating backport issues
becomes trivial.
The backport-create-issue script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The script used to create backport issues is located at
``src/script/backport-create-issue`` in the master branch. Though there might be
an older version of this script in a stable branch, do not use it. Only use the
most recent version from master.
Once you have the script somewhere in your PATH, you can proceed to install the
dependencies.
The dependencies are:
* python3
* python-redmine
Python 3 should already be present on any recent Linux installation. The second
dependency, `python-redmine`_, can be obtained from PyPI::
pip3 install --user python-redmine
.. _`python-redmine`: https://pypi.org/project/python-redmine/
Then, try to run the script::
backport-create-issue --help
This should produce a usage message.
Finally, run the script to actually create the Backport issues.
For example, if the tracker issue number is 55555::
backport-create-issue --user <tracker_username> --password <tracker_password> 55555
The script needs to know your https://tracker.ceph.com credentials in order to
authenticate to Redmine. In lieu of providing your literal username and password
on the command line, you could also obtain a REST API key ("My account" -> "API
access key") and run the script like so::
backport-create-issue --key <tracker_api_key> 55555
.. _`stage backports`:
.. _`stage the backport`:
.. _`staging a backport`:
Opening a backport PR
---------------------
Once the `Tracker workflow`_ is completed and the `backport tracker issue`_ has
been created, it's time to open a backport PR. One possibility is to do this
manually, while taking care to follow the `cherry-picking rules`_. However, this
can result in a backport that is not properly staged. For example, the PR
description might not contain a link to the `backport tracker issue`_ (a common
oversight). You might even forget to update the `backport tracker issue`_.
In the past, much time was lost, and much frustration caused, by the necessity
of staging backports manually. Now, fortunately, there is a script available
which automates the process and takes away most of the guesswork.
The ceph-backport.sh script
^^^^^^^^^^^^^^^^^^^^^^^^^^^
As with `creating backport tracker issues`_, staging the backport PR and
updating the Backport tracker issue are difficult, if not impossible, to get
right manually, and they quickly become tedious if done more than once in a
while.
The ``ceph-backport.sh`` script automates the entire process of cherry-picking
the commits from the master PR, opening the GitHub backport PR, and
cross-linking the GitHub backport PR with the correct Backport tracker issue.
The script can also be used to good effect if you have already manually prepared
the backport branch with the cherry-picks in it.
The script is located at ``src/script/ceph-backport.sh`` in the ``master``
branch. Though there might be an older version of this script in a stable
branch, do not use it. Only use the most recent version from the master branch.
To run the script from anywhere, and from any branch, you can use the
following alias, which invokes the most recent version of the script from
``upstream/master`` of your local ceph clone on every call::
alias ceph-backport="bash <(git --git-dir=$pathToCephClone/.git --no-pager show upstream/master:src/script/ceph-backport.sh)"
``ceph-backport.sh`` is just a bash script, so the only dependency is ``bash``
itself, but it does need to be run in the top level of a local clone of
``ceph/ceph.git``. A small up-front time investment is required to get the
script working in your environment. This is because the script needs to
authenticate itself (i.e., as you) in order to use the GitHub and Redmine REST
API services.
The script is self-documenting. Just run the script and proceed from there.
Once the script has been set up properly, you can validate the setup like so::
ceph-backport.sh --setup
Once the script reports "Overall setup is OK", you have two options for
staging the backport: either leave everything to the script, or prepare the
backport branch yourself and use the script only for creating the PR and
updating the Backport tracker issue.
If you prefer to leave everything to the script, just provide the Backport
tracker issue number to the script::
ceph-backport.sh 55555
The script will start by creating the backport branch in your local git clone.
The script always uses the following format for naming the branch::
wip-<backport_issue_number>-<name_of_stable_branch>
For example, if the Backport tracker issue number is 55555 and it's targeting
the stable branch "nautilus", the backport branch would be named::
wip-55555-nautilus
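The naming convention is easy to reproduce in the shell. A minimal sketch,
using a hypothetical issue number and target branch:

```shell
# Hypothetical values: Backport tracker issue 55555, targeting "nautilus".
issue=55555
target=nautilus
branch="wip-${issue}-${target}"
echo "$branch"    # prints: wip-55555-nautilus
# The branch itself would then be created with, e.g.:
#   git checkout -b "$branch" "upstream/$target"
```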
If you prefer to create the backport branch yourself, just do that. Be sure to
name the backport branch as described above. (It's a good idea to use this
branch naming convention for all your backporting work.) Then, run the script::
ceph-backport.sh 55555
The script will see that the backport branch already exists, and use it.
Once the script hits the first cherry-pick conflict, it will no longer provide
any cherry-picking assistance, so in that case it's up to you to resolve the conflict(s)
(as described in `Conflict resolution`_) and finish cherry-picking
all of the remaining commits. Once you are satisfied that the backport is complete in
your local branch, `ceph-backport.sh` can finish the job of creating the pull request
and updating the backport tracker issue. To make that happen, just re-run the script
exactly as you did before::
ceph-backport.sh $BACKPORT_TRACKER_ID
The script will detect that it is running from a branch with the same name as
the one it would have created on the first run, and will resume the process
after the cherry-picking phase.
For a quick CLI reference containing the above information, run::
ceph-backport.sh --usage
Conflict resolution
^^^^^^^^^^^^^^^^^^^
If git reports conflicts, the script will abort to allow you to resolve the
conflicts manually.
Once the conflicts are resolved, complete the cherry-pick ::
git cherry-pick --continue
Git will present a draft commit message with a "Conflicts" section.
Unfortunately, in recent versions of git, the Conflicts section is commented
out. Since the Conflicts section is mandatory for Ceph backports that do not
apply cleanly, you will need to uncomment the entire "Conflicts" section
of the commit message before committing the cherry-pick. You can also
include commentary on what the conflicts were and how you resolved
them. For example::
Conflicts:
src/foo/bar.cc
- mimic does not have blatz; use batlo instead
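Putting the steps of this subsection together, a conflicted cherry-pick is
typically resolved with a sequence like the following (the SHA and the file
path are hypothetical):

```shell
git cherry-pick -x 01d73020     # hypothetical SHA; git reports a conflict
git status                      # lists the conflicted files, e.g. src/foo/bar.cc
$EDITOR src/foo/bar.cc          # resolve the conflict markers by hand
git add src/foo/bar.cc          # mark the conflict as resolved
git cherry-pick --continue      # opens the draft commit message
# In the editor: uncomment the "Conflicts:" section and describe the resolution.
```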
When editing the cherry-pick commit message, leave everything before the
"cherry picked from" line unchanged. Any edits you make should be in the part
following that line. Here is an example::
osd: check batlo before setting blatz
Setting blatz requires special precautions. Check batlo first.
Fixes: https://tracker.ceph.com/issues/99999
Signed-off-by: Random J Developer <random@developer.example.com>
(cherry picked from commit 01d73020da12f40ccd95ea1e49cfcf663f1a3a75)
Conflicts:
src/osd/batlo.cc
- add_batlo_check has an extra arg in newer code
Naturally, the ``Fixes`` line points to the master issue. You might be tempted
to modify it so it points to the backport issue, but - please - don't do that.
First, the master issue points to all the backport issues, and second, *any*
editing of the original commit message calls the entire backport into doubt,
simply because there is no good reason for such editing.
The part below the ``(cherry picked from commit ...)`` line is fair game for
editing. If you need to add additional information to the cherry-pick commit
message, append that information below this line. Once again: do not modify the
original commit message.
If you use `ceph-backport.sh` for your backport creation (which is recommended),
read up at the end of `The ceph-backport.sh script`_ on how to continue from here.
Labelling of backport PRs
-------------------------
Once the backport PR is open, the first order of business is to set the
Milestone tag to the stable release the backport PR is targeting. For example,
if the PR is targeting "nautilus", set the Milestone tag to "nautilus".
If you don't have sufficient GitHub permissions to set the Milestone, don't
worry. Members of the `Stable Releases and Backports team`_ periodically run
a script (``ceph-backport.sh --milestones``) which scans all PRs targeting stable
branches and automatically adds the correct Milestone tag if it is missing.
Next, check which component label was applied to the master PR corresponding to
this backport, and double-check that that label is applied to the backport PR as
well. For example, if the master PR carries the component label "core", the
backport PR should also get that label.
In general, it is the responsibility of the `Stable Releases and Backports
team`_ to ensure that backport PRs are properly labelled. If in doubt, just
leave the labelling to them.
.. _`backport PR reviewing`:
.. _`backport PR testing`:
.. _`backport PR merging`:
Reviewing, testing, and merging of backport PRs
-----------------------------------------------
Once your backport PR is open and the Milestone is set properly, the
`Stable Releases and Backports team`_ will take care of getting the PR
reviewed and tested.
If you would like to facilitate this process, you can solicit reviews and run
integration tests on the PR. In this case, add comments to the PR describing the
tests you ran and their results.
Once the PR has been reviewed and deemed to have undergone sufficient testing,
it will be merged. Even if you have sufficient GitHub permissions to merge the
PR, please do *not* merge it yourself. (Uncontrolled merging to stable branches
unnecessarily complicates the release preparation process, which is done by
volunteers.)
Stable Releases and Backports team
----------------------------------
Ceph has a `Stable Releases and Backports`_ team, staffed by volunteers,
which is charged with maintaining the stable releases and backporting bugfixes
from the master branch to them. (That team maintains a wiki, accessible by
clicking the `Stable Releases and Backports`_ link, which describes various
workflows in the backporting lifecycle.)
.. _`Stable Releases and Backports`: http://tracker.ceph.com/projects/ceph-releases/wiki
Ordinarily, it is enough to fill out the "Backport" field in the bug (tracker
issue). The volunteers from the Stable Releases and Backports team will
backport the fix, run regression tests on it, and include it in one or more
future point releases.
Submitting Patches to Ceph - Kernel Components
==============================================
Submission of patches to the Ceph kernel code is subject to the same rules
and guidelines as any other patches to the Linux Kernel. These are set out in
``Documentation/process/submitting-patches.rst`` in the kernel source tree.
What follows is a condensed version of those rules and guidelines, updated based
on the Ceph project's best practices.
.. contents::
:depth: 3
Signing contributions
---------------------
In order to keep the record of code attribution clean within the source
repository, follow these guidelines for signing patches submitted to the
project. These definitions are taken from those used by the Linux kernel
and many other open source projects.
1. Sign your work
#################
To improve tracking of who did what, especially with patches that can
percolate to their final resting place in the kernel through several
layers of maintainers, we've introduced a "sign-off" procedure on
patches that are being emailed around.
The sign-off is a simple line at the end of the explanation for the
patch, which certifies that you wrote it or otherwise have the right to
pass it on as an open-source patch. The rules are pretty simple: if you
can certify the below:
Developer's Certificate of Origin 1.1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
then you just add a line saying ::
Signed-off-by: Random J Developer <random@developer.example.org>
using your real name (sorry, no pseudonyms or anonymous contributions.)
Some people also put extra tags at the end. They'll just be ignored for
now, but you can do this to mark internal company procedures or just
point out some special detail about the sign-off.
If you are a subsystem or branch maintainer, sometimes you need to slightly
modify patches you receive in order to merge them, because the code is not
exactly the same in your tree and the submitters'. If you stick strictly to
rule (c), you should ask the submitter to rediff, but this is a totally
counter-productive waste of time and energy. Rule (b) allows you to adjust
the code, but then it is very impolite to change one submitter's code and
make them endorse your bugs. To solve this problem, it is recommended that
you add a line between the last Signed-off-by header and yours, indicating
the nature of your changes. While there is nothing mandatory about this, it
seems like prepending the description with your mail and/or name, all
enclosed in square brackets, is noticeable enough to make it obvious that
you are responsible for last-minute changes. Example ::
Signed-off-by: Random J Developer <random@developer.example.org>
[lucky@maintainer.example.org: struct foo moved from foo.c to foo.h]
Signed-off-by: Lucky K Maintainer <lucky@maintainer.example.org>
This practise is particularly helpful if you maintain a stable branch and
want at the same time to credit the author, track changes, merge the fix,
and protect the submitter from complaints. Note that under no circumstances
can you change the author's identity (the From header), as it is the one
which appears in the changelog.
Special note to back-porters: It seems to be a common and useful practise
to insert an indication of the origin of a patch at the top of the commit
message (just after the subject line) to facilitate tracking. For instance,
here's what we see in 2.6-stable ::
Date: Tue May 13 19:10:30 2008 +0000
SCSI: libiscsi regression in 2.6.25: fix nop timer handling
commit 4cf1043593db6a337f10e006c23c69e5fc93e722 upstream
And here's what appears in 2.4 ::
Date: Tue May 13 22:12:27 2008 +0200
wireless, airo: waitbusy() won't delay
[backport of 2.6 commit b7acbdfbd1f277c1eb23f344f899cfa4cd0bf36a]
Whatever the format, this information provides a valuable help to people
tracking your trees, and to people trying to trouble-shoot bugs in your
tree.
2. When to use ``Acked-by:`` and ``Cc:``
########################################
The ``Signed-off-by:`` tag indicates that the signer was involved in the
development of the patch, or that he/she was in the patch's delivery path.
If a person was not directly involved in the preparation or handling of a
patch but wishes to signify and record their approval of it then they can
arrange to have an ``Acked-by:`` line added to the patch's changelog.
``Acked-by:`` is often used by the maintainer of the affected code when that
maintainer neither contributed to nor forwarded the patch.
``Acked-by:`` is not as formal as ``Signed-off-by:``. It is a record that the acker
has at least reviewed the patch and has indicated acceptance. Hence patch
mergers will sometimes manually convert an acker's "yep, looks good to me"
into an ``Acked-by:``.
``Acked-by:`` does not necessarily indicate acknowledgement of the entire patch.
For example, if a patch affects multiple subsystems and has an ``Acked-by:`` from
one subsystem maintainer then this usually indicates acknowledgement of just
the part which affects that maintainer's code. Judgement should be used here.
When in doubt people should refer to the original discussion in the mailing
list archives.
If a person has had the opportunity to comment on a patch, but has not
provided such comments, you may optionally add a "Cc:" tag to the patch.
This is the only tag which might be added without an explicit action by the
person it names. This tag documents that potentially interested parties
have been included in the discussion.
3. Using ``Reported-by:``, ``Tested-by:`` and ``Reviewed-by:``
##############################################################
If this patch fixes a problem reported by somebody else, consider adding a
``Reported-by:`` tag to credit the reporter for their contribution. This tag should
not be added without the reporter's permission, especially if the problem was
not reported in a public forum. That said, if we diligently credit our bug
reporters, they will, hopefully, be inspired to help us again in the future.
A ``Tested-by:`` tag indicates that the patch has been successfully tested (in
some environment) by the person named. This tag informs maintainers that
some testing has been performed, provides a means to locate testers for
future patches, and ensures credit for the testers.
``Reviewed-by:``, instead, indicates that the patch has been reviewed and found
acceptable according to the Reviewer's Statement:
Reviewer's statement of oversight
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By offering my ``Reviewed-by:`` tag, I state that:
(a) I have carried out a technical review of this patch to
evaluate its appropriateness and readiness for inclusion into
the mainline kernel.
(b) Any problems, concerns, or questions relating to the patch
have been communicated back to the submitter. I am satisfied
with the submitter's response to my comments.
(c) While there may be things that could be improved with this
submission, I believe that it is, at this time, (1) a
worthwhile modification to the kernel, and (2) free of known
issues which would argue against its inclusion.
(d) While I have reviewed the patch and believe it to be sound, I
do not (unless explicitly stated elsewhere) make any
warranties or guarantees that it will achieve its stated
purpose or function properly in any given situation.
A ``Reviewed-by`` tag is a statement of opinion that the patch is an
appropriate modification of the kernel without any remaining serious
technical issues. Any interested reviewer (who has done the work) can
offer a ``Reviewed-by`` tag for a patch. This tag serves to give credit to
reviewers and to inform maintainers of the degree of review which has been
done on the patch. ``Reviewed-by:`` tags, when supplied by reviewers known to
understand the subject area and to perform thorough reviews, will normally
increase the likelihood of your patch getting into the kernel.
Preparing and sending patches
-----------------------------
For the kernel client, patches are expected to be emailed directly to the
email list ``ceph-devel@vger.kernel.org`` (note: *not* ``dev@ceph.io``) and reviewed
in the email list.
The best way to generate a patch for manual submission is to work from
a Git checkout of the Ceph kernel client (kernel modules) repository located at
https://github.com/ceph/ceph-client. You can then generate patches
with the 'git format-patch' command. For example,
.. code-block:: bash
$ git format-patch HEAD^^ -o mything
will take the last two commits and generate patches in the mything/
directory. The commit you specify on the command line is the
'upstream' commit that you are diffing against. Note that it does
not necessarily have to be an ancestor of your current commit. You
can do something like
.. code-block:: bash
$ git checkout -b mything
# ... do lots of stuff ...
$ git fetch
# ...find out that origin/unstable has also moved forward...
$ git format-patch origin/unstable -o mything
and the patches will be against origin/unstable.
The ``-o`` dir is optional; if left off, the patch(es) will appear in
the current directory. This can quickly get messy.
You can also add ``--cover-letter`` and get a '0000' patch in the
mything/ directory. That can be updated to include any overview
stuff for a multipart patch series. If it's a single patch, don't
bother.
Make sure your patch does not include any extra files which do not
belong in a patch submission. Make sure to review your patch -after-
generating it with ``diff(1)``, to ensure accuracy.
If your changes produce a lot of deltas, you may want to look into
splitting them into individual patches which modify things in
logical stages. This will facilitate easier reviewing by other
kernel developers, very important if you want your patch accepted.
There are a number of scripts which can aid in this.
The ``git send-email`` command makes it super easy to send patches
(particularly those prepared with ``git format-patch``). It is careful to
format the emails correctly so that you don't have to worry about your
email client mangling whitespace or otherwise screwing things up. It
works like so:
.. code-block:: bash
$ git send-email --to ceph-devel@vger.kernel.org my.patch
for a single patch, or
.. code-block:: bash
$ git send-email --to ceph-devel@vger.kernel.org mything
to send a whole patch series (prepared with, say, git format-patch).
No MIME, no links, no compression, no attachments. Just plain text
------------------------------------------------------------------
Developers need to be able to read and comment on the changes you are
submitting. It is important for a kernel developer to be able to
"quote" your changes, using standard e-mail tools, so that they may
comment on specific portions of your code.
For this reason, all patches should be submitted by e-mail "inline".
WARNING: Be wary of your editor's word-wrap corrupting your patch,
if you choose to cut-n-paste your patch.
Do not attach the patch as a MIME attachment, compressed or not.
Many popular e-mail applications will not always transmit a MIME
attachment as plain text, making it impossible to comment on your
code. A MIME attachment also takes Linus a bit more time to process,
decreasing the likelihood of your MIME-attached change being accepted.
Exception: If your mailer is mangling patches then someone may ask
you to re-send them using MIME.
Style Guide
-----------
The Linux Kernel has coding style conventions, which are set forth in
``Documentation/process/coding-style.rst``. Please adhere to these conventions.
==========================
Submitting Patches to Ceph
==========================
Patches to Ceph can be divided into three categories:
1. patches targeting Ceph kernel code
2. patches targeting the "master" branch
3. patches targeting stable branches (e.g.: "nautilus")
Some parts of Ceph - notably the RBD and CephFS kernel clients - are maintained
within the Linux Kernel. For patches targeting this code, please refer to the
file ``SubmittingPatches-kernel.rst``.
The rest of this document assumes that your patch relates to Ceph code that is
maintained in the GitHub repository https://github.com/ceph/ceph
If you have a patch that fixes an issue, feel free to open a GitHub pull request
("PR") targeting the "master" branch, but do read this document first, as it
contains important information for ensuring that your PR passes code review
smoothly.
For patches targeting stable branches (e.g. "nautilus"), please also see
the file ``SubmittingPatches-backports.rst``.
.. contents::
:depth: 3
Sign your work
--------------
The sign-off is a simple line at the end of the explanation for the
commit, which certifies that you wrote it or otherwise have the right to
pass it on as an open-source patch. The rules are pretty simple: if you
can certify the below:
Developer's Certificate of Origin 1.1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
then you just add a line saying ::
Signed-off-by: Random J Developer <random@developer.example.org>
using your real name (sorry, no pseudonyms or anonymous contributions.)
Git can sign off on your behalf
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Please note that git makes it trivially easy to sign off on commits. First, set the
following config options::
$ git config --list | grep user
user.email=my_real_email_address@example.com
user.name=My Real Name
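If these options are not set yet, they can be set like so (the values shown
are placeholders; use your real name and address):

```shell
# Placeholder values; substitute your real name and e-mail address.
git config --global user.email "my_real_email_address@example.com"
git config --global user.name "My Real Name"
```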
Then just remember to use ``git commit -s``. Git will add the ``Signed-off-by``
line automatically.
Separate your changes
---------------------
Group *logical changes* into individual commits.
If you have a series of bulleted modifications, consider separating each of
those into its own commit.
For example, if your changes include both bug fixes and performance enhancements
for a single component, separate those changes into two or more commits. If your
changes include an API update, and a new feature which uses that new API,
separate those into two patches.
On the other hand, if you make a single change that affects numerous
files, group those changes into a single commit. Thus a single logical change is
contained within a single patch. (If the change needs to be backported, that
might change the calculus, because smaller commits are easier to backport.)
Describe your changes
---------------------
Each commit has an associated commit message that is stored in git. The first
line of the commit message is the `commit title`_. The second line should be
left blank. The lines that follow constitute the `commit message`_.
A commit and its message should be focused around a particular change.
Commit title
^^^^^^^^^^^^
The text up to the first empty line in a commit message is the commit
title. It should be a single short line of at most 72 characters,
summarizing the change, and prefixed with the
subsystem or module you are changing. Also, it is conventional to use the
imperative mood in the commit title. Positive examples include::
mds: add perf counter for finisher of MDSRank
osd: make the ClassHandler::mutex private
More positive examples can be obtained from the git history of the ``master``
branch::
git log
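The 72-character limit is easy to check mechanically. A throwaway sketch,
using one of the example titles above:

```shell
# Warn when a commit title exceeds 72 characters.
title="mds: add perf counter for finisher of MDSRank"
if [ "${#title}" -le 72 ]; then
    echo "title ok (${#title} chars)"
else
    echo "title too long (${#title} chars)"
fi
```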
Some negative examples (how *not* to title a commit message)::
update driver X
bug fix for driver X
fix issue 99999
Further to the last negative example ("fix issue 99999"), see `Fixes line(s)`_.
Commit message
^^^^^^^^^^^^^^
(This section is about the body of the commit message. Please also see
the preceding section, `Commit title`_, for advice on titling commit messages.)
In the body of your commit message, be as specific as possible. If the commit
message title was too short to fully state what the commit is doing, use the
body to explain not just the "what", but also the "why".
For positive examples, peruse ``git log`` in the ``master`` branch. A negative
example would be a commit message that merely states the obvious. For example:
"this patch includes updates for subsystem X. Please apply."
Fixes line(s)
^^^^^^^^^^^^^
If the commit fixes one or more issues tracked by http://tracker.ceph.com,
add a ``Fixes:`` line (or lines) to the commit message, to connect this change
to addressed issue(s) - for example::
Fixes: http://tracker.ceph.com/issues/12345
This line should be added just before the ``Signed-off-by:`` line (see `Sign
your work`_).
It helps reviewers to get more context of this bug and facilitates updating of
the bug tracker. Also, anyone perusing the git history will see this line and be
able to refer to the bug tracker easily.
Here is an example showing a properly-formed commit message::
doc: add "--foo" option to bar
This commit updates the man page for bar with the newly added "--foo"
option.
Fixes: http://tracker.ceph.com/issues/12345
Signed-off-by: Random J Developer <random@developer.example.org>
If a commit fixes a regression introduced by a different commit, please also
(in addition to the above) add a line referencing the SHA1 of the commit that
introduced the regression. For example::
Fixes: 9dbe7a003989f8bb45fe14aaa587e9d60a392727
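If you are unsure which commit introduced the regression, git's "pickaxe"
search can help locate it. A sketch (the search string and file path are
hypothetical):

```shell
# Find commits that added or removed the string "blatz" in a given file.
git log --oneline -S 'blatz' -- src/osd/batlo.cc
```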
PR best practices
-----------------
PRs should be opened on branches contained in your fork of
https://github.com/ceph/ceph.git - do not push branches directly to
``ceph/ceph.git``.
PRs should target "master". If you need to add a patch to a stable branch, such
as "nautilus", see the file ``SubmittingPatches-backports.rst``.
In addition to a base, or "target" branch, PRs have several other components:
the `PR title`_, the `PR description`_, labels, comments, etc. Of these, the PR
title and description are relevant for new contributors.
PR title
^^^^^^^^
If your PR has only one commit, the PR title can be the same as the commit title
(and GitHub will suggest this). If the PR has multiple commits, do not accept
the title GitHub suggests. Either use the title of the most relevant commit, or
write your own title. In the latter case, use the same "subsystem: short
description" convention described in `Commit title`_ for the PR title, with
the following difference: the PR title describes the entire set of changes,
while the `Commit title`_ describes only the changes in a particular commit.
Keep in mind that the PR titles feed directly into the script that generates
release notes and it is tedious to clean up non-conformant PR titles at release
time. This document places no limit on the length of PR titles, but be aware
that they are subject to editing as part of the release process.
PR description
^^^^^^^^^^^^^^
In addition to a title, the PR also has a description field, or "body".
The PR description is a place for summarizing the PR as a whole. It need not
duplicate information that is already in the commit messages. It can contain
notices to maintainers, links to tracker issues and other related information,
to-do lists, etc. The PR title and description should give readers a high-level
notion of what the PR is about, quickly enabling them to decide whether they
should take a closer look.
Flag your changes for backport
------------------------------
If you believe your changes should be backported to stable branches after the PR
is merged, open a tracker issue at https://tracker.ceph.com explaining:
1. what bug is fixed
2. why does the bug need to be fixed in <release>
and fill out the Backport field in the tracker issue. For example::
Backport: mimic, nautilus
For information on how backports are done in the Ceph project, refer to the
document ``SubmittingPatches-backports.rst``.
Test your changes
-----------------
Before opening your PR, it's a good idea to run tests on your patchset. Doing
that is simple, though the process can take a long time to complete, especially
on older machines with less memory and spinning disks.
The simplest test is to verify that your patchset builds, at least in your
own development environment. The commands for this are::
./install-deps.sh
./do_cmake.sh
make
Ceph comes with a battery of tests that can be run on a single machine. These
are collectively referred to as "make check", and can be run by executing the
following command::
./run-make-check.sh
If your patchset does not build, or if one or more of the "make check" tests
fails, but the error shown is not obviously related to your patchset, don't let
that dissuade you from opening a PR. The Ceph project has a Jenkins instance
which will build your PR branch and run "make check" on it in a controlled
environment.
Once your patchset builds and passes "make check", you can run even more tests
on it by issuing the following commands::
cd build
../qa/run-standalone.sh
Like "make check", the standalone tests take a long time to run. They also
produce voluminous output. If one or more of the standalone tests fails, it's
likely the relevant part of the output will have scrolled off your screen or
gotten swapped out of your buffer. Therefore, it makes sense to capture the
output in a file for later analysis.
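One simple way to do that is to pipe the run through ``tee`` (the log file
name here is arbitrary):

```shell
cd build
../qa/run-standalone.sh 2>&1 | tee standalone.log
```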
Document your changes
---------------------
If you have added or modified any user-facing functionality, such as CLI
commands or their output, then the pull request must include appropriate updates
to documentation.
It is the submitter's responsibility to make the changes, and the reviewer's
responsibility to make sure they are not merging changes that do not
have the needed updates to documentation.
Where documentation is absent, or there is no clear place to note the change
being made, the reviewer should contact the component lead, who should arrange
for the missing section to be created with sufficient detail for the PR
submitter to document their changes.
When writing and/or editing documentation, follow the Google Developer
Documentation Style Guide: https://developers.google.com/style/
#!/bin/sh
cd "$(dirname "$0")"
cd ..
TOPDIR=`pwd`
install -d -m0755 build-doc
if command -v dpkg >/dev/null; then
packages=`cat ${TOPDIR}/doc_deps.deb.txt`
for package in $packages; do
if [ "$(dpkg --status -- $package 2>&1 | sed -n 's/^Status: //p')" != "install ok installed" ]; then
# add a space after old values
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required packages, please install them:" 1>&2
echo "sudo apt-get install -o APT::Install-Recommends=true $missing" 1>&2
exit 1
fi
elif command -v yum >/dev/null; then
for package in ant ditaa doxygen libxslt-devel libxml2-devel graphviz python3-devel python3-pip python3-virtualenv python3-Cython; do
if ! rpm -q --whatprovides $package >/dev/null ; then
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required packages, please install them:" 1>&2
echo "yum install $missing" 1>&2
exit 1
fi
else
for command in dot virtualenv doxygen ant ditaa cython; do
if ! command -v "$command" > /dev/null; then
# add a space after old values
missing="${missing:+$missing }$command"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required commands, please install them:" 1>&2
echo "$missing" 1>&2
exit 1
fi
fi
# Don't enable -e until after running all the potentially-erroring checks
# for availability of commands
set -e
cat $TOPDIR/src/osd/PeeringState.h $TOPDIR/src/osd/PeeringState.cc | $TOPDIR/doc/scripts/gen_state_diagram.py > $TOPDIR/doc/dev/peering_graph.generated.dot
cd build-doc
[ -z "$vdir" ] && vdir="$TOPDIR/build-doc/virtualenv"
if [ ! -e $vdir ]; then
virtualenv --python=python3 $vdir
fi
$vdir/bin/pip install --quiet -r $TOPDIR/admin/doc-requirements.txt -r $TOPDIR/admin/doc-python-common-requirements.txt
install -d -m0755 \
$TOPDIR/build-doc/output/html \
$TOPDIR/build-doc/output/man
# To avoid having to build librbd (and the other Ceph libraries) just to build
# the Python bindings for the docs, create dummy lib*.so stubs so that the
# modules can be imported by sphinx's "automodule::" directive.
mkdir -p $vdir/lib
export LD_LIBRARY_PATH="$vdir/lib"
export PYTHONPATH=$TOPDIR/src/pybind
$vdir/bin/python $TOPDIR/doc/scripts/gen_mon_command_api.py > $TOPDIR/doc/api/mon_command_api.rst
# FIXME(sileht): I dunno how to pass the include-dirs correctly with pip
# for build_ext step, it should be:
# --global-option=build_ext --global-option="--cython-include-dirs $TOPDIR/src/pybind/rados/"
# but that doesn't work, so copying the file in the rbd module directly, that's ok for docs
for bind in rados rbd cephfs rgw; do
if [ ${bind} != rados ]; then
cp -f $TOPDIR/src/pybind/rados/rados.pxd $TOPDIR/src/pybind/${bind}/
fi
ln -sf lib${bind}.so.1 $vdir/lib/lib${bind}.so
gcc -shared -o $vdir/lib/lib${bind}.so.1 -xc /dev/null
ld_flags="-Wl,-rpath,$vdir/lib"
if [ $(uname) != Darwin ]; then
ld_flags="${ld_flags},--no-as-needed"
fi
BUILD_DOC=1 \
CFLAGS="-iquote$TOPDIR/src/include" \
CPPFLAGS="-iquote$TOPDIR/src/include" \
LDFLAGS="-L$vdir/lib ${ld_flags}" \
$vdir/bin/pip install --upgrade $TOPDIR/src/pybind/${bind}
# the stub library must define every undefined symbol the binding references,
# e.g. rgwfile_version(), librgw_create(), rgw_mount() for rgw
# since py3.5, distutils adds postfix in between ${bind} and so
lib_fn=$vdir/lib/python*/*-packages/${bind}.*.so
if [ ! -e $lib_fn ]; then
lib_fn=$vdir/lib/python*/*-packages/${bind}.so
fi
if [ ${bind} = "cephfs" ]; then
func_prefix="ceph"
else
func_prefix="(lib)?${bind}"
fi
nm $lib_fn | grep -E "U (_)?${func_prefix}" | \
awk '{ gsub(/^_/,"",$2); print "void "$2"(void) {}" }' | \
gcc -shared -o $vdir/lib/lib${bind}.so.1 -xc -
if [ ${bind} != rados ]; then
rm -f $TOPDIR/src/pybind/${bind}/rados.pxd
fi
done
if [ $# -eq 0 ]; then
sphinx_targets="html man"
else
sphinx_targets=$@
fi
for target in $sphinx_targets; do
builder=$target
case $target in
html)
builder=dirhtml
extra_opt="-D graphviz_output_format=svg"
;;
man)
extra_opt="-t man"
;;
esac
# Build with -W so that warnings are treated as errors and this fails
$vdir/bin/sphinx-build -W --keep-going -a -b $builder $extra_opt -d doctrees \
$TOPDIR/doc $TOPDIR/build-doc/output/$target
done
# build the releases.json. this reads in the yaml version and dumps
# out the json representation of the same file. the resulting releases.json
# should be served from the root of the hosted site.
$vdir/bin/python << EOF > $TOPDIR/build-doc/output/html/releases.json
from __future__ import print_function
import datetime
import json
import yaml
def json_serialize(obj):
if isinstance(obj, datetime.date):
return obj.isoformat()
with open("$TOPDIR/doc/releases/releases.yml", 'r') as fp:
releases = yaml.safe_load(fp)
print(json.dumps(releases, indent=2, default=json_serialize))
EOF
#
# Build and install JavaDocs
#
JAVADIR=$TOPDIR/src/java
# Clean and build JavaDocs
rm -rf $JAVADIR/doc
ant -buildfile $JAVADIR/build.xml docs
# Create clean target directory
JAVA_OUTDIR=$TOPDIR/build-doc/output/html/cephfs/api/libcephfs-java/javadoc
rm -rf $JAVA_OUTDIR
mkdir $JAVA_OUTDIR
# Copy JavaDocs to target directory
cp -a $JAVADIR/doc/* $JAVA_OUTDIR/
echo "SUCCESS"
pcpp
Jinja2
-e../src/python-common
plantweb
git+https://github.com/readthedocs/readthedocs-sphinx-search@master
Sphinx == 3.2.1
git+https://github.com/ceph/sphinx-ditaa.git@py3#egg=sphinx-ditaa
breathe
pyyaml >= 5.1.2
Cython
prettytable
sphinx-autodoc-typehints
sphinx-prompt
Sphinx-Substitution-Extensions
typed-ast
#!/usr/bin/python3
from __future__ import print_function
import http.server
import socketserver
import os
import sys
path = os.path.dirname(sys.argv[0])
os.chdir(path)
os.chdir('..')
os.chdir('build-doc/output/html')
class Handler(http.server.SimpleHTTPRequestHandler):
    def send_head(self):
        # horrible kludge because SimpleHTTPServer is buggy wrt
        # slash-redirecting of requests with query arguments, and will
        # redirect to /foo?q=bar/ -- wrong slash placement
        self.path = self.path.split('?', 1)[0]
        return http.server.SimpleHTTPRequestHandler.send_head(self)
class ReusingTCPServer(socketserver.TCPServer):
    # allow immediate restarts without "Address already in use" errors
    allow_reuse_address = True
httpd = ReusingTCPServer(
    ("", 8080),
    Handler,
)
try:
print("Serving docs at http://localhost:8080")
httpd.serve_forever()
except KeyboardInterrupt:
pass
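The slash-redirect kludge in send_head() amounts to keeping only the path component of the request and discarding everything after the first `?`. A minimal sketch of that rule (the function name is mine, not part of the server):

```python
def strip_query(path: str) -> str:
    # Keep only the path component, dropping '?query' so the
    # slash-redirect logic never sees the query string.
    return path.split('?', 1)[0]

print(strip_query("/foo?q=bar"))  # -> /foo
print(strip_query("/a/b"))        # -> /a/b
```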
#!/usr/bin/env bash
#
# File: git-archive-all.sh
#
# Description: A utility script that builds an archive file(s) of all
# git repositories and submodules in the current path.
# Useful for creating a single tarfile of a git super-
# project that contains other submodules.
#
# Examples: Use git-archive-all.sh to create archive distributions
# from git repositories. To use, simply do:
#
# cd $GIT_DIR; git-archive-all.sh
#
# where $GIT_DIR is the root of your git superproject.
#
# License: GPL3
#
###############################################################################
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
###############################################################################
# DEBUGGING
set -e
set -C # noclobber
# TRAP SIGNALS
trap 'cleanup' QUIT EXIT
# For security reasons, explicitly set the internal field separator
# to newline, space, tab
OLD_IFS=$IFS
IFS='
'
function cleanup () {
rm -rf $TMPDIR
IFS="$OLD_IFS"
}
function usage () {
echo "Usage is as follows:"
echo
echo "$PROGRAM <--version>"
echo " Prints the program version number on a line by itself and exits."
echo
echo "$PROGRAM <--usage|--help|-?>"
echo " Prints this usage output and exits."
echo
echo "$PROGRAM [--format <fmt>] [--prefix <path>] [--verbose|-v] [--separate|-s]"
echo " [--tree-ish|-t <tree-ish>] [--ignore pattern] [output_file]"
echo " Creates an archive for the entire git superproject, and its submodules"
echo " using the passed parameters, described below."
echo
echo " If '--format' is specified, the archive is created with the named"
echo " git archiver backend. Obviously, this must be a backend that git archive"
echo " understands. The format defaults to 'tar' if not specified."
echo
echo " If '--prefix' is specified, the archive's superproject and all submodules"
echo " are created with the <path> prefix named. The default is to not use one."
echo
echo " If '--separate' or '-s' is specified, individual archives will be created"
echo " for each of the superproject itself and its submodules. The default is to"
echo " concatenate individual archives into one larger archive."
echo
echo " If '--tree-ish' is specified, the archive will be created based on whatever"
echo " you define the tree-ish to be. Branch names, commit hash, etc. are acceptable."
echo " Defaults to HEAD if not specified. See git archive's documentation for more"
echo " information on what a tree-ish is."
echo
echo " If '--ignore' is specified, we will filter out any submodules that"
echo " match the specified pattern."
echo
echo " If 'output_file' is specified, the resulting archive is created as the"
echo " file named. This parameter is essentially a path that must be writeable."
echo " When combined with '--separate' ('-s') this path must refer to a directory."
echo " Without this parameter or when combined with '--separate' the resulting"
echo " archive(s) are named with a dot-separated path of the archived directory and"
echo " a file extension equal to their format (e.g., 'superdir.submodule1dir.tar')."
echo
echo " If '--verbose' or '-v' is specified, progress will be printed."
}
function version () {
echo "$PROGRAM version $VERSION"
}
# Internal variables and initializations.
readonly PROGRAM=`basename "$0"`
readonly VERSION=0.2
OLD_PWD="`pwd`"
TMPDIR=`mktemp -d "${TMPDIR:-/tmp}/$PROGRAM.XXXXXX"`
TMPFILE=`mktemp "$TMPDIR/$PROGRAM.XXXXXX"` # Create a place to store our work's progress
TOARCHIVE=`mktemp "$TMPDIR/$PROGRAM.toarchive.XXXXXX"`
OUT_FILE=$OLD_PWD # assume "this directory" without a name change by default
SEPARATE=0
VERBOSE=0
TARCMD=tar
[[ $(uname) == "Darwin" ]] && TARCMD=gnutar
FORMAT=tar
PREFIX=
TREEISH=HEAD
IGNORE=
# RETURN VALUES/EXIT STATUS CODES
readonly E_BAD_OPTION=254
readonly E_UNKNOWN=255
# Process command-line arguments.
while test $# -gt 0; do
case $1 in
--format )
shift
FORMAT="$1"
shift
;;
--prefix )
shift
PREFIX="$1"
shift
;;
--separate | -s )
shift
SEPARATE=1
;;
--tree-ish | -t )
shift
TREEISH="$1"
shift
;;
--ignore )
shift
IGNORE="$1"
shift
;;
--version )
version
exit
;;
--verbose | -v )
shift
VERBOSE=1
;;
-? | --usage | --help )
usage
exit
;;
-* )
echo "Unrecognized option: $1" >&2
usage
exit $E_BAD_OPTION
;;
* )
break
;;
esac
done
if [ ! -z "$1" ]; then
OUT_FILE="$1"
shift
fi
# Validate parameters; error early, error often.
if [ $SEPARATE -eq 1 -a ! -d $OUT_FILE ]; then
echo "When creating multiple archives, your destination must be a directory."
echo "If it's not, you risk being surprised when your files are overwritten."
exit 1
elif [ `git config -l | grep -q '^core\.bare=false'; echo $?` -ne 0 ]; then
echo "$PROGRAM must be run from a git working copy (i.e., not a bare repository)."
exit 1
fi
# Create the superproject's git-archive
if [ $VERBOSE -eq 1 ]; then
echo -n "creating superproject archive..."
fi
git archive --format=$FORMAT --prefix="$PREFIX" $TREEISH > $TMPDIR/$(basename "$(pwd)").$FORMAT
if [ $VERBOSE -eq 1 ]; then
echo "done"
fi
echo $TMPDIR/$(basename "$(pwd)").$FORMAT >| $TMPFILE # clobber on purpose
superfile=`head -n 1 $TMPFILE`
if [ $VERBOSE -eq 1 ]; then
echo -n "looking for subprojects..."
fi
# find all '.git' dirs, these show us the remaining to-be-archived dirs
# we only want directories that are below the current directory
find . -mindepth 2 -name '.git' -type d -print | sed -e 's/^\.\///' -e 's/\.git$//' >> $TOARCHIVE
# as of version 1.7.8, git places the submodule .git directories under the superproject's .git dir
# the submodules get a .git file that points to their .git dir. we need to find all of these too
find . -mindepth 2 -name '.git' -type f -print | xargs grep -l "gitdir" | sed -e 's/^\.\///' -e 's/\.git$//' >> $TOARCHIVE
if [ -n "$IGNORE" ]; then
cat $TOARCHIVE | grep -v $IGNORE > $TOARCHIVE.new
mv $TOARCHIVE.new $TOARCHIVE
fi
if [ $VERBOSE -eq 1 ]; then
echo "done"
echo " found:"
cat $TOARCHIVE | while read arch
do
echo " $arch"
done
fi
if [ $VERBOSE -eq 1 ]; then
echo -n "archiving submodules..."
fi
while read path; do
TREEISH=$(git submodule | grep "^ .*${path%/} " | cut -d ' ' -f 2) # git submodule does not list trailing slashes in $path
cd "$path"
git archive --format=$FORMAT --prefix="${PREFIX}$path" ${TREEISH:-HEAD} > "$TMPDIR"/"$(echo "$path" | sed -e 's/\//./g')"$FORMAT
if [ $FORMAT == 'zip' ]; then
# delete the empty directory entry; zipped submodules won't unzip if we don't do this
zip -d "$(tail -n 1 $TMPFILE)" "${PREFIX}${path%/}" >/dev/null # remove trailing '/'
fi
echo "$TMPDIR"/"$(echo "$path" | sed -e 's/\//./g')"$FORMAT >> $TMPFILE
cd "$OLD_PWD"
done < $TOARCHIVE
if [ $VERBOSE -eq 1 ]; then
echo "done"
fi
if [ $VERBOSE -eq 1 ]; then
echo -n "concatenating archives into single archive..."
fi
# Concatenate archives into a super-archive.
if [ $SEPARATE -eq 0 ]; then
if [ $FORMAT == 'tar' ]; then
sed -e '1d' $TMPFILE | while read file; do
$TARCMD --concatenate -f "$superfile" "$file" && rm -f "$file"
done
elif [ $FORMAT == 'zip' ]; then
sed -e '1d' $TMPFILE | while read file; do
# zip incorrectly stores the full path, so cd and then grow
cd `dirname "$file"`
zip -g "$superfile" `basename "$file"` && rm -f "$file"
done
cd "$OLD_PWD"
fi
echo "$superfile" >| $TMPFILE # clobber on purpose
fi
if [ $VERBOSE -eq 1 ]; then
echo "done"
fi
if [ $VERBOSE -eq 1 ]; then
echo -n "moving archive to $OUT_FILE..."
fi
while read file; do
mv "$file" "$OUT_FILE"
done < $TMPFILE
if [ $VERBOSE -eq 1 ]; then
echo "done"
fi
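The archives created above are named by turning the slashes in each submodule path into dots. Because the paths read from $TOARCHIVE keep their trailing slash (only the literal `.git` suffix was stripped by sed), the substitution also supplies the dot before the format extension. A sketch of the naming rule (the function name is mine):

```python
def archive_name(path: str, fmt: str = "tar") -> str:
    # "src/module/" -> "src.module." + "tar" -> "src.module.tar",
    # mirroring the sed 's/\//./g' substitution in the script above
    return path.replace("/", ".") + fmt

print(archive_name("src/module/"))  # -> src.module.tar
```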
MENV_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}" )" && pwd)"
export PATH=${MENV_ROOT}/bin:$PATH
alias mset='source $MENV_ROOT/mset.sh'
case "$TERM" in
xterm-*color)
PS1='\[\033[$MRUN_PROMPT_COLOR;1m\]${MRUN_PROMPT}\[\033[00m\]'${PS1}
;;
*)
PS1='${MRUN_PROMPT}'${PS1}
;;
esac
export MRUN_CEPH_ROOT=$HOME/ceph
ceph-menv
Installation
1. Build links
# assuming ceph build directory is at $HOME/ceph/build
$ cd ceph-menv
$ ./build_links.sh
A different ceph repository can be passed as the first argument to build_links.sh.
2. Configure shell environment
To your shell startup script (such as $HOME/.bashrc) add the following:
source ~/ceph-menv/.menvrc
(modify line appropriately if ceph-menv was installed at a different location)
ceph-menv
Environment assistant for use in conjunction with multiple ceph vstart (or, more accurately, mstart) clusters. It eliminates the need to specify the cluster being used with each and every command, and can show the currently selected cluster in the shell prompt.
Usage:
$ mset <cluster>
For example:
$ mstart.sh c1 -n
$ mset c1
[ c1 ] $ ceph -w
To un-set cluster:
$ mset
#!/bin/bash
DIR=`dirname $0`
ROOT=$1
[ "$ROOT" == "" ] && ROOT="$HOME/ceph"
mkdir -p $DIR/bin
echo $PWD
for f in `ls $ROOT/build/bin`; do
echo $f
ln -sf ../mdo.sh $DIR/bin/$f
done
echo "MRUN_CEPH_ROOT=$ROOT" > $DIR/.menvroot
#!/bin/bash
cmd=`basename $0`
MENV_ROOT=`dirname $0`/..
if [ -f $MENV_ROOT/.menvroot ]; then
. $MENV_ROOT/.menvroot
fi
[ "$MRUN_CEPH_ROOT" == "" ] && MRUN_CEPH_ROOT=$HOME/ceph
if [ "$MRUN_CLUSTER" == "" ]; then
${MRUN_CEPH_ROOT}/build/bin/$cmd "$@"
exit $?
fi
${MRUN_CEPH_ROOT}/src/mrun $MRUN_CLUSTER $cmd "$@"
get_color() {
s=$1
sum=1 # offset so that 'c1' doesn't map to green, which wouldn't contrast with the prompt
for i in `seq 1 ${#s}`; do
c=${s:$((i-1)):1};
o=`printf '%d' "'$c"`
sum=$((sum+$o))
done
echo $sum
}
if [ "$1" == "" ]; then
unset MRUN_CLUSTER
unset MRUN_PROMPT
else
export MRUN_CLUSTER=$1
export MRUN_PROMPT='['${MRUN_CLUSTER}'] '
col=$(get_color $1)
MRUN_PROMPT_COLOR=$((col%7+31))
fi
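The prompt color above is derived from the cluster name: the character codes are summed (with an offset of 1) and mapped onto the ANSI foreground range 31..37. A Python sketch of the same mapping (the function name is mine):

```python
def prompt_color(cluster_name: str) -> int:
    # Sum the character codes, offset by 1 so 'c1' avoids green,
    # then map into the ANSI foreground colors 31..37.
    total = 1
    for ch in cluster_name:
        total += ord(ch)
    return total % 7 + 31

print(prompt_color("c1"))  # -> 33 (yellow)
```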
# AddCephTest is a module for adding tests to the "make check" target, which runs CTest.
# It registers a target/script as a test, adds it to the check target, and sets the
# necessary environment variables.
function(add_ceph_test test_name test_path)
add_test(NAME ${test_name} COMMAND ${test_path} ${ARGN})
if(TARGET ${test_name})
add_dependencies(tests ${test_name})
endif()
set_property(TEST
${test_name}
PROPERTY ENVIRONMENT
CEPH_ROOT=${CMAKE_SOURCE_DIR}
CEPH_BIN=${CMAKE_RUNTIME_OUTPUT_DIRECTORY}
CEPH_LIB=${CMAKE_LIBRARY_OUTPUT_DIRECTORY}
CEPH_BUILD_DIR=${CMAKE_BINARY_DIR}
LD_LIBRARY_PATH=${CMAKE_BINARY_DIR}/lib
PATH=${CMAKE_RUNTIME_OUTPUT_DIRECTORY}:${CMAKE_SOURCE_DIR}/src:$ENV{PATH}
PYTHONPATH=${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/cython_modules/lib.3:${CMAKE_SOURCE_DIR}/src/pybind
CEPH_BUILD_VIRTUALENV=${CEPH_BUILD_VIRTUALENV})
# none of the tests should take more than 1 hour to complete
set_property(TEST
${test_name}
PROPERTY TIMEOUT ${CEPH_TEST_TIMEOUT})
endfunction()
option(WITH_GTEST_PARALLEL "Enable running gtest based tests in parallel" OFF)
if(WITH_GTEST_PARALLEL)
if(NOT TARGET gtest-parallel_ext)
set(gtest_parallel_source_dir ${CMAKE_CURRENT_BINARY_DIR}/gtest-parallel)
include(ExternalProject)
ExternalProject_Add(gtest-parallel_ext
SOURCE_DIR "${gtest_parallel_source_dir}"
GIT_REPOSITORY "https://github.com/google/gtest-parallel.git"
GIT_TAG "master"
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
INSTALL_COMMAND "")
add_dependencies(tests gtest-parallel_ext)
find_package(Python3 QUIET REQUIRED)
set(GTEST_PARALLEL_COMMAND
${Python3_EXECUTABLE} ${gtest_parallel_source_dir}/gtest-parallel)
endif()
endif()
#sets uniform compiler flags and link libraries
function(add_ceph_unittest unittest_name)
set(UNITTEST "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/${unittest_name}")
# If the second argument is "parallel", it means we want a parallel run
if(WITH_GTEST_PARALLEL AND "${ARGV1}" STREQUAL "parallel")
set(UNITTEST ${GTEST_PARALLEL_COMMAND} ${UNITTEST})
endif()
add_ceph_test(${unittest_name} "${UNITTEST}")
target_link_libraries(${unittest_name} ${UNITTEST_LIBS})
endfunction()
function(add_tox_test name)
set(test_name run-tox-${name})
set(venv_path ${CEPH_BUILD_VIRTUALENV}/${name}-virtualenv)
cmake_parse_arguments(TOXTEST "" "TOX_PATH" "TOX_ENVS" ${ARGN})
if(DEFINED TOXTEST_TOX_PATH)
set(tox_path ${TOXTEST_TOX_PATH})
else()
set(tox_path ${CMAKE_CURRENT_SOURCE_DIR})
endif()
list(APPEND tox_envs py3)
if(DEFINED TOXTEST_TOX_ENVS)
list(APPEND tox_envs ${TOXTEST_TOX_ENVS})
endif()
string(REPLACE ";" "," tox_envs "${tox_envs}")
find_package(Python3 QUIET REQUIRED)
add_custom_command(
OUTPUT ${venv_path}/bin/activate
COMMAND ${CMAKE_SOURCE_DIR}/src/tools/setup-virtualenv.sh --python="${Python3_EXECUTABLE}" ${venv_path}
WORKING_DIRECTORY ${tox_path}
COMMENT "preparing venv for ${name}")
add_custom_target(${name}-venv
DEPENDS ${venv_path}/bin/activate)
add_dependencies(tests ${name}-venv)
add_test(
NAME ${test_name}
COMMAND ${CMAKE_SOURCE_DIR}/src/script/run_tox.sh
--source-dir ${CMAKE_SOURCE_DIR}
--build-dir ${CMAKE_BINARY_DIR}
--tox-path ${tox_path}
--tox-envs ${tox_envs}
--venv-path ${venv_path})
set_property(
TEST ${test_name}
PROPERTY ENVIRONMENT
CEPH_ROOT=${CMAKE_SOURCE_DIR}
CEPH_BIN=${CMAKE_RUNTIME_OUTPUT_DIRECTORY}
CEPH_LIB=${CMAKE_LIBRARY_OUTPUT_DIRECTORY}
CEPH_BUILD_VIRTUALENV=${CEPH_BUILD_VIRTUALENV}
LD_LIBRARY_PATH=${CMAKE_BINARY_DIR}/lib
PATH=${CMAKE_RUNTIME_OUTPUT_DIRECTORY}:${CMAKE_SOURCE_DIR}/src:$ENV{PATH}
PYTHONPATH=${CMAKE_SOURCE_DIR}/src/pybind)
list(APPEND tox_test run-tox-${name})
endfunction()
# This module builds Boost. It sets the following variables:
#
# Boost_FOUND : boolean - system has Boost
# Boost_LIBRARIES : list(filepath) - the libraries needed to use Boost
# Boost_INCLUDE_DIRS : list(path) - the Boost include directories
#
# The following hints are respected:
#
# Boost_USE_STATIC_LIBS : boolean (default: OFF)
# Boost_USE_MULTITHREADED : boolean (default: OFF)
# BOOST_J : integer (default: 1)
function(check_boost_version source_dir expected_version)
set(version_hpp "${source_dir}/boost/version.hpp")
if(NOT EXISTS ${version_hpp})
message(FATAL_ERROR "${version_hpp} not found. Please either \"rm -rf ${source_dir}\" "
"so I can download Boost v${expected_version} for you, or make sure ${source_dir} "
"contains a full copy of Boost v${expected_version}.")
endif()
file(STRINGS "${version_hpp}" BOOST_VERSION_LINE
REGEX "^#define[ \t]+BOOST_VERSION[ \t]+[0-9]+$")
string(REGEX REPLACE "^#define[ \t]+BOOST_VERSION[ \t]+([0-9]+)$"
"\\1" BOOST_VERSION "${BOOST_VERSION_LINE}")
math(EXPR BOOST_VERSION_PATCH "${BOOST_VERSION} % 100")
math(EXPR BOOST_VERSION_MINOR "${BOOST_VERSION} / 100 % 1000")
math(EXPR BOOST_VERSION_MAJOR "${BOOST_VERSION} / 100000")
set(version "${BOOST_VERSION_MAJOR}.${BOOST_VERSION_MINOR}.${BOOST_VERSION_PATCH}")
if(version VERSION_LESS expected_version)
message(FATAL_ERROR "Boost v${version} in ${source_dir} is not new enough. "
"Please either \"rm -rf ${source_dir}\" so I can download Boost v${expected_version} "
"for you, or make sure ${source_dir} contains a copy of Boost v${expected_version}.")
else()
message(STATUS "boost (${version} >= ${expected_version}) already in ${source_dir}")
endif()
endfunction()
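BOOST_VERSION in boost/version.hpp encodes major*100000 + minor*100 + patch; the three math(EXPR) lines above simply invert that encoding. A quick sketch of the decoding (the function name is mine):

```python
def decode_boost_version(boost_version: int) -> str:
    # Invert BOOST_VERSION = major*100000 + minor*100 + patch
    patch = boost_version % 100
    minor = boost_version // 100 % 1000
    major = boost_version // 100000
    return f"{major}.{minor}.{patch}"

print(decode_boost_version(107300))  # -> 1.73.0
```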
macro(list_replace list old new)
list(FIND ${list} ${old} where)
if(where GREATER -1)
list(REMOVE_AT ${list} ${where})
list(INSERT ${list} ${where} ${new})
endif()
unset(where)
endmacro()
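list_replace swaps only the first occurrence of an element, in place, and does nothing if the element is absent. An equivalent Python sketch (the function name mirrors the macro, but this helper is mine):

```python
def list_replace(lst: list, old, new) -> None:
    # Replace the first occurrence of `old` with `new`, if present;
    # like the CMake macro, a missing element is a silent no-op.
    try:
        i = lst.index(old)
    except ValueError:
        return
    lst[i] = new

libs = ["unit_test_framework", "python"]
list_replace(libs, "unit_test_framework", "test")
print(libs)  # -> ['test', 'python']
```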
function(do_build_boost version)
cmake_parse_arguments(Boost_BUILD "" "" COMPONENTS ${ARGN})
set(boost_features "variant=release")
if(Boost_USE_MULTITHREADED)
list(APPEND boost_features "threading=multi")
else()
list(APPEND boost_features "threading=single")
endif()
if(Boost_USE_STATIC_LIBS)
list(APPEND boost_features "link=static")
else()
list(APPEND boost_features "link=shared")
endif()
if(CMAKE_SIZEOF_VOID_P EQUAL 8)
list(APPEND boost_features "address-model=64")
else()
list(APPEND boost_features "address-model=32")
endif()
set(BOOST_CXXFLAGS "-fPIC -w") # check on arm, etc <---XXX
list(APPEND boost_features "cxxflags=${BOOST_CXXFLAGS}")
set(boost_with_libs)
foreach(c ${Boost_BUILD_COMPONENTS})
if(c MATCHES "^python([0-9])\$")
set(with_python_version "${CMAKE_MATCH_1}")
list(APPEND boost_with_libs "python")
elseif(c MATCHES "^python([0-9])\\.?([0-9])\$")
set(with_python_version "${CMAKE_MATCH_1}.${CMAKE_MATCH_2}")
list(APPEND boost_with_libs "python")
else()
list(APPEND boost_with_libs ${c})
endif()
endforeach()
list_replace(boost_with_libs "unit_test_framework" "test")
string(REPLACE ";" "," boost_with_libs "${boost_with_libs}")
# build b2 and prepare the project-config.jam for boost
set(configure_command
./bootstrap.sh --prefix=<INSTALL_DIR>
--with-libraries=${boost_with_libs})
set(b2 ./b2)
if(BOOST_J)
message(STATUS "BUILDING Boost Libraries at j ${BOOST_J}")
list(APPEND b2 -j${BOOST_J})
endif()
# suppress all debugging levels for b2
list(APPEND b2 -d0)
if(CMAKE_CXX_COMPILER_ID STREQUAL GNU)
set(toolset gcc)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL Clang)
set(toolset clang)
else()
message(SEND_ERROR "unknown compiler: ${CMAKE_CXX_COMPILER_ID}")
endif()
set(user_config ${CMAKE_BINARY_DIR}/user-config.jam)
# edit the user-config.jam so b2 will be able to use the specified
# toolset and python
file(WRITE ${user_config}
"using ${toolset}"
" : "
" : ${CMAKE_CXX_COMPILER}"
" ;\n")
if(with_python_version)
find_package(Python3 ${with_python_version} QUIET REQUIRED
COMPONENTS Development)
string(REPLACE ";" " " python3_includes "${Python3_INCLUDE_DIRS}")
file(APPEND ${user_config}
"using python"
" : ${with_python_version}"
" : ${Python3_EXECUTABLE}"
" : ${python3_includes}"
" : ${Python3_LIBRARIES}"
" ;\n")
endif()
list(APPEND b2 --user-config=${user_config})
list(APPEND b2 toolset=${toolset})
if(with_python_version)
list(APPEND b2 python=${with_python_version})
endif()
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm|ARM")
list(APPEND b2 abi=aapcs)
list(APPEND b2 architecture=arm)
list(APPEND b2 binary-format=elf)
endif()
if(WITH_BOOST_VALGRIND)
list(APPEND b2 valgrind=on)
endif()
set(build_command
${b2} headers stage
#"--buildid=ceph" # changes lib names--can omit for static
${boost_features})
set(install_command
${b2} install)
set(boost_root_dir "${CMAKE_BINARY_DIR}/boost")
if(EXISTS "${PROJECT_SOURCE_DIR}/src/boost/bootstrap.sh")
check_boost_version("${PROJECT_SOURCE_DIR}/src/boost" ${version})
set(source_dir
SOURCE_DIR "${PROJECT_SOURCE_DIR}/src/boost")
elseif(version VERSION_GREATER 1.73)
message(FATAL_ERROR "Unknown BOOST_REQUESTED_VERSION: ${version}")
else()
message(STATUS "boost will be downloaded...")
# NOTE: If you change this version number make sure the package is available
# at the three URLs below (may involve uploading to download.ceph.com)
set(boost_version 1.73.0)
set(boost_sha256 4eb3b8d442b426dc35346235c8733b5ae35ba431690e38c6a8263dce9fcbb402)
string(REPLACE "." "_" boost_version_underscore ${boost_version} )
set(boost_url
https://dl.bintray.com/boostorg/release/${boost_version}/source/boost_${boost_version_underscore}.tar.bz2)
if(CMAKE_VERSION VERSION_GREATER 3.7)
set(boost_url
"${boost_url} http://downloads.sourceforge.net/project/boost/boost/${boost_version}/boost_${boost_version_underscore}.tar.bz2")
set(boost_url
"${boost_url} https://download.ceph.com/qa/boost_${boost_version_underscore}.tar.bz2")
endif()
set(source_dir
URL ${boost_url}
URL_HASH SHA256=${boost_sha256}
DOWNLOAD_NO_PROGRESS 1)
endif()
# build all components in a single shot
include(ExternalProject)
ExternalProject_Add(Boost
${source_dir}
CONFIGURE_COMMAND CC=${CMAKE_C_COMPILER} CXX=${CMAKE_CXX_COMPILER} ${configure_command}
BUILD_COMMAND CC=${CMAKE_C_COMPILER} CXX=${CMAKE_CXX_COMPILER} ${build_command}
BUILD_IN_SOURCE 1
INSTALL_COMMAND ${install_command}
PREFIX "${boost_root_dir}")
endfunction()
set(Boost_context_DEPENDENCIES thread chrono system date_time)
set(Boost_coroutine_DEPENDENCIES context system)
set(Boost_filesystem_DEPENDENCIES system)
set(Boost_iostreams_DEPENDENCIES regex)
set(Boost_thread_DEPENDENCIES chrono system date_time atomic)
macro(build_boost version)
do_build_boost(${version} ${ARGN})
ExternalProject_Get_Property(Boost install_dir)
set(Boost_INCLUDE_DIRS ${install_dir}/include)
set(Boost_INCLUDE_DIR ${install_dir}/include)
set(Boost_VERSION ${version})
# create the directory so cmake won't complain when looking at the imported
# target
file(MAKE_DIRECTORY ${Boost_INCLUDE_DIRS})
cmake_parse_arguments(Boost_BUILD "" "" COMPONENTS ${ARGN})
foreach(c ${Boost_BUILD_COMPONENTS})
list(APPEND components ${c})
if(Boost_${c}_DEPENDENCIES)
list(APPEND components ${Boost_${c}_DEPENDENCIES})
list(REMOVE_DUPLICATES components)
endif()
endforeach()
set(Boost_BUILD_COMPONENTS ${components})
unset(components)
foreach(c ${Boost_BUILD_COMPONENTS})
string(TOUPPER ${c} upper_c)
if(Boost_USE_STATIC_LIBS)
add_library(Boost::${c} STATIC IMPORTED)
else()
add_library(Boost::${c} SHARED IMPORTED)
endif()
add_dependencies(Boost::${c} Boost)
if(c MATCHES "^python")
set(c "python${Python3_VERSION_MAJOR}${Python3_VERSION_MINOR}")
endif()
if(Boost_USE_STATIC_LIBS)
set(Boost_${upper_c}_LIBRARY
${install_dir}/lib/${CMAKE_STATIC_LIBRARY_PREFIX}boost_${c}${CMAKE_STATIC_LIBRARY_SUFFIX})
else()
set(Boost_${upper_c}_LIBRARY
${install_dir}/lib/${CMAKE_SHARED_LIBRARY_PREFIX}boost_${c}${CMAKE_SHARED_LIBRARY_SUFFIX})
endif()
unset(buildid)
set_target_properties(Boost::${c} PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${Boost_INCLUDE_DIRS}"
IMPORTED_LINK_INTERFACE_LANGUAGES "CXX"
IMPORTED_LOCATION "${Boost_${upper_c}_LIBRARY}")
if((c MATCHES "coroutine|context") AND (WITH_BOOST_VALGRIND))
set_target_properties(Boost::${c} PROPERTIES
INTERFACE_COMPILE_DEFINITIONS "BOOST_USE_VALGRIND")
endif()
list(APPEND Boost_LIBRARIES ${Boost_${upper_c}_LIBRARY})
endforeach()
foreach(c ${Boost_BUILD_COMPONENTS})
if(Boost_${c}_DEPENDENCIES)
foreach(dep ${Boost_${c}_DEPENDENCIES})
list(APPEND dependencies Boost::${dep})
endforeach()
set_target_properties(Boost::${c} PROPERTIES
INTERFACE_LINK_LIBRARIES "${dependencies}")
unset(dependencies)
endif()
set(Boost_${c}_FOUND "TRUE")
endforeach()
# for header-only libraries
add_library(Boost::boost INTERFACE IMPORTED)
set_target_properties(Boost::boost PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${Boost_INCLUDE_DIRS}")
add_dependencies(Boost::boost Boost)
find_package_handle_standard_args(Boost DEFAULT_MSG
Boost_INCLUDE_DIRS Boost_LIBRARIES)
mark_as_advanced(Boost_LIBRARIES BOOST_INCLUDE_DIRS)
endmacro()
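build_boost expands each requested component with its direct dependencies from the Boost_*_DEPENDENCIES lists, deduplicating as it goes; note the expansion is one level deep, not transitive. A sketch of that expansion under the dependency table defined above (the Python names are mine):

```python
# Direct dependencies, copied from the Boost_*_DEPENDENCIES lists above.
BOOST_DEPENDENCIES = {
    "context": ["thread", "chrono", "system", "date_time"],
    "coroutine": ["context", "system"],
    "filesystem": ["system"],
    "iostreams": ["regex"],
    "thread": ["chrono", "system", "date_time"],
}

def expand_components(requested):
    # Append each component and its direct dependencies,
    # keeping only the first occurrence of every name.
    out = []
    for c in requested:
        for name in [c] + BOOST_DEPENDENCIES.get(c, []):
            if name not in out:
                out.append(name)
    return out

print(expand_components(["coroutine", "filesystem"]))
# -> ['coroutine', 'context', 'system', 'filesystem']
```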
function(maybe_add_boost_dep target)
get_target_property(type ${target} TYPE)
if(NOT type MATCHES "OBJECT_LIBRARY|STATIC_LIBRARY|SHARED_LIBRARY|EXECUTABLE")
return()
endif()
get_target_property(sources ${target} SOURCES)
string(GENEX_STRIP "${sources}" sources)
foreach(src ${sources})
get_filename_component(ext ${src} EXT)
# assuming all cxx source files include boost header(s)
if(ext MATCHES ".cc|.cpp|.cxx")
add_dependencies(${target} Boost::boost)
return()
endif()
endforeach()
endfunction()
# override add_library() to add Boost headers dependency
function(add_library target)
_add_library(${target} ${ARGN})
# can't add dependencies to aliases or imported libraries
if (NOT ";${ARGN};" MATCHES ";(ALIAS|IMPORTED);")
maybe_add_boost_dep(${target})
endif()
endfunction()
function(add_executable target)
_add_executable(${target} ${ARGN})
maybe_add_boost_dep(${target})
endfunction()
function(do_build_dpdk dpdk_dir)
# mk/machine/native/rte.vars.mk
# rte_cflags are extracted from mk/machine/${machine}/rte.vars.mk
# only 3 of them have -march=<arch> defined, so copying them here.
# we need to pass the -march=<arch> to ${cc} as some headers in dpdk
# require it to compile. for instance, dpdk/include/rte_memcpy.h.
if(CMAKE_SYSTEM_PROCESSOR MATCHES "i386")
set(arch "x86_64")
set(machine "default")
set(machine_tmpl "native")
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686")
set(arch "i686")
set(machine "default")
set(machine_tmpl "native")
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64|x86_64|AMD64")
set(arch "x86_64")
set(machine "default")
set(machine_tmpl "native")
set(rte_cflags "-march=core2")
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "arm|ARM")
set(arch "arm")
set(machine "armv7a")
set(machine_tmpl "armv7a")
set(rte_cflags "-march=armv7-a")
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64")
set(arch "arm64")
set(machine "armv8a")
set(machine_tmpl "armv8a")
set(rte_cflags "-march=armv8-a+crc")
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "(powerpc|ppc)64")
set(arch "ppc_64")
set(machine "power8")
set(machine_tmpl "power8")
else()
message(FATAL_ERROR "not able to build DPDK support: "
"unknown arch \"${CMAKE_SYSTEM_PROCESSOR}\"")
endif()
set(dpdk_rte_CFLAGS "${rte_cflags}" CACHE INTERNAL "")
if(CMAKE_SYSTEM_NAME MATCHES "Linux")
set(execenv "linux")
elseif(CMAKE_SYSTEM_NAME MATCHES "FreeBSD")
set(execenv "freebsd")
else()
message(FATAL_ERROR "not able to build DPDK support: "
"unsupported OS \"${CMAKE_SYSTEM_NAME}\"")
endif()
if(CMAKE_C_COMPILER_ID STREQUAL GNU)
set(toolchain "gcc")
elseif(CMAKE_C_COMPILER_ID STREQUAL Clang)
set(toolchain "clang")
elseif(CMAKE_C_COMPILER_ID STREQUAL Intel)
set(toolchain "icc")
else()
message(FATAL_ERROR "not able to build DPDK support: "
"unknown compiler \"${CMAKE_C_COMPILER_ID}\"")
endif()
set(target "${arch}-${machine_tmpl}-${execenv}-${toolchain}")
include(FindMake)
find_make("MAKE_EXECUTABLE" "make_cmd")
execute_process(
COMMAND ${MAKE_EXECUTABLE} showconfigs
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}/src/spdk/dpdk
OUTPUT_VARIABLE supported_targets
OUTPUT_STRIP_TRAILING_WHITESPACE)
string(REPLACE "\n" ";" supported_targets "${supported_targets}")
list(FIND supported_targets ${target} found)
if(found EQUAL -1)
message(FATAL_ERROR "not able to build DPDK support: "
"unsupported target. "
"\"${target}\" not listed in ${supported_targets}")
endif()
if(Seastar_DPDK AND WITH_SPDK)
message(FATAL_ERROR "not able to build DPDK with "
"both Seastar_DPDK and WITH_SPDK enabled")
elseif(Seastar_DPDK)
set(dpdk_source_dir ${CMAKE_SOURCE_DIR}/src/seastar/dpdk)
else() # WITH_SPDK or WITH_DPDK is enabled
set(dpdk_source_dir ${CMAKE_SOURCE_DIR}/src/spdk/dpdk)
endif()
include(ExternalProject)
ExternalProject_Add(dpdk-ext
SOURCE_DIR ${dpdk_source_dir}
CONFIGURE_COMMAND ${make_cmd} config O=${dpdk_dir} T=${target}
BUILD_COMMAND ${make_cmd} O=${dpdk_dir} CC=${CMAKE_C_COMPILER} EXTRA_CFLAGS=-fPIC
BUILD_IN_SOURCE 1
INSTALL_COMMAND "true")
if(NUMA_FOUND)
set(numa "y")
else()
set(numa "n")
endif()
ExternalProject_Add_Step(dpdk-ext patch-config
COMMAND ${CMAKE_MODULE_PATH}/patch-dpdk-conf.sh ${dpdk_dir} ${machine} ${arch} ${numa}
DEPENDEES configure
DEPENDERS build)
# easier to adjust the config
ExternalProject_Add_StepTargets(dpdk-ext configure patch-config build)
endfunction()
function(do_export_dpdk dpdk_dir)
set(DPDK_INCLUDE_DIR ${dpdk_dir}/include)
# create the directory so cmake won't complain when looking at the imported
# target
file(MAKE_DIRECTORY ${DPDK_INCLUDE_DIR})
if(NOT TARGET dpdk::cflags)
add_library(dpdk::cflags INTERFACE IMPORTED)
if (dpdk_rte_CFLAGS)
set_target_properties(dpdk::cflags PROPERTIES
INTERFACE_COMPILE_OPTIONS "${dpdk_rte_CFLAGS}")
endif()
endif()
list(APPEND dpdk_components
bus_pci
eal
kvargs
mbuf
mempool
mempool_ring
pci
ring
telemetry)
if(Seastar_DPDK)
list(APPEND dpdk_components
bus_vdev
cfgfile
hash
net
pmd_bnxt
pmd_cxgbe
pmd_e1000
pmd_ena
pmd_enic
pmd_i40e
pmd_ixgbe
pmd_nfp
pmd_qede
pmd_ring
pmd_sfc_efx
timer)
endif()
foreach(c ${dpdk_components})
add_library(dpdk::${c} STATIC IMPORTED)
add_dependencies(dpdk::${c} dpdk-ext)
set(dpdk_${c}_LIBRARY
"${dpdk_dir}/lib/${CMAKE_STATIC_LIBRARY_PREFIX}rte_${c}${CMAKE_STATIC_LIBRARY_SUFFIX}")
set_target_properties(dpdk::${c} PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES ${DPDK_INCLUDE_DIR}
INTERFACE_LINK_LIBRARIES dpdk::cflags
IMPORTED_LOCATION "${dpdk_${c}_LIBRARY}")
list(APPEND DPDK_LIBRARIES dpdk::${c})
list(APPEND DPDK_ARCHIVES "${dpdk_${c}_LIBRARY}")
endforeach()
if(NUMA_FOUND)
set(dpdk_numa " -Wl,-lnuma")
endif()
add_library(dpdk::dpdk INTERFACE IMPORTED)
add_dependencies(dpdk::dpdk
${DPDK_LIBRARIES})
# workaround for https://gitlab.kitware.com/cmake/cmake/issues/16947
set_target_properties(dpdk::dpdk PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES ${DPDK_INCLUDE_DIR}
INTERFACE_LINK_LIBRARIES
"-Wl,--whole-archive $<JOIN:${DPDK_ARCHIVES}, > -Wl,--no-whole-archive ${dpdk_numa} -Wl,-lpthread,-ldl")
if(dpdk_rte_CFLAGS)
set_target_properties(dpdk::dpdk PROPERTIES
INTERFACE_COMPILE_OPTIONS "${dpdk_rte_CFLAGS}")
endif()
endfunction()
function(build_dpdk dpdk_dir)
find_package(NUMA QUIET)
if(NOT TARGET dpdk-ext)
do_build_dpdk(${dpdk_dir})
endif()
if(NOT TARGET dpdk::dpdk)
do_export_dpdk(${dpdk_dir})
endif()
endfunction()
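# A minimal usage sketch for build_dpdk (an assumption for illustration: the
# real call site and the output directory name live elsewhere in the Ceph
# build, and "my_tgt" is a hypothetical consumer target):
#
#   if(WITH_DPDK)
#     set(dpdk_dir ${CMAKE_BINARY_DIR}/src/dpdk)
#     build_dpdk(${dpdk_dir})
#     # consumers then link the whole-archive imported target:
#     # target_link_libraries(my_tgt PRIVATE dpdk::dpdk)
#   endif()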
function(build_fio)
  # we use an external project and copy the sources to the binary directory to
  # ensure that object files are built outside of the source tree.
include(ExternalProject)
if(ALLOCATOR)
set(FIO_EXTLIBS EXTLIBS=-l${ALLOCATOR})
endif()
ExternalProject_Add(fio_ext
DOWNLOAD_DIR ${CMAKE_BINARY_DIR}/src/
UPDATE_COMMAND "" # this disables rebuild on each run
GIT_REPOSITORY "https://github.com/axboe/fio.git"
GIT_CONFIG advice.detachedHead=false
GIT_SHALLOW 1
GIT_TAG "fio-3.15"
SOURCE_DIR ${CMAKE_BINARY_DIR}/src/fio
BUILD_IN_SOURCE 1
CONFIGURE_COMMAND <SOURCE_DIR>/configure
BUILD_COMMAND $(MAKE) fio EXTFLAGS=-Wno-format-truncation ${FIO_EXTLIBS}
INSTALL_COMMAND cp <BINARY_DIR>/fio ${CMAKE_BINARY_DIR}/bin)
endfunction()
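# Usage sketch for build_fio (hedged: the guard variable name below is
# illustrative, not necessarily the option used by the surrounding build):
#
#   if(WITH_FIO)
#     build_fio()
#     # the fio binary is copied to ${CMAKE_BINARY_DIR}/bin/fio on install
#   endif()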
##
# Build rules for the Intel QAT (QuickAssist Technology) Linux driver
##
set(qatdrv_root_dir "${CMAKE_BINARY_DIR}/qatdrv")
set(qatdrv_url "https://01.org/sites/default/files/downloads/intelr-quickassist-technology/qat1.7.l.4.2.0-00012.tar.gz")
set(qatdrv_url_hash "SHA256=47990b3283ded748799dba42d4b0e1bdc0be3cf3978bd587533cd12788b03856")
set(qatdrv_config_args "--enable-qat-uio")
include(ExternalProject)
ExternalProject_Add(QatDrv
URL ${qatdrv_url}
URL_HASH ${qatdrv_url_hash}
CONFIGURE_COMMAND ${qatdrv_env} ./configure ${qatdrv_config_args}
  # Temporarily forcing single thread as multi-threaded make is causing build
  # failures.
BUILD_COMMAND make -j1 quickassist-all
BUILD_IN_SOURCE 1
INSTALL_COMMAND ""
TEST_COMMAND ""
PREFIX ${qatdrv_root_dir})
set(QatDrv_INCLUDE_DIRS
${qatdrv_root_dir}/src/QatDrv/quickassist/include
${qatdrv_root_dir}/src/QatDrv/quickassist/lookaside/access_layer/include
${qatdrv_root_dir}/src/QatDrv/quickassist/include/lac
${qatdrv_root_dir}/src/QatDrv/quickassist/utilities/libusdm_drv
${qatdrv_root_dir}/src/QatDrv/quickassist/utilities/libusdm_drv/linux/include)
set(QatDrv_LIBRARIES
${qatdrv_root_dir}/src/QatDrv/build/libqat_s.so
${qatdrv_root_dir}/src/QatDrv/build/libusdm_drv_s.so)
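# Consumption sketch (hedged: "my_crypto" is a hypothetical target used only
# to show how the QatDrv_* variables above are typically wired up once the
# QatDrv external project has been added):
#
#   add_dependencies(my_crypto QatDrv)
#   target_include_directories(my_crypto PRIVATE ${QatDrv_INCLUDE_DIRS})
#   target_link_libraries(my_crypto PRIVATE ${QatDrv_LIBRARIES})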
function(build_rocksdb)
set(rocksdb_CMAKE_ARGS -DCMAKE_POSITION_INDEPENDENT_CODE=ON)
list(APPEND rocksdb_CMAKE_ARGS -DWITH_GFLAGS=OFF)
  # cmake doesn't properly handle arguments containing ";", such as
  # CMAKE_PREFIX_PATH, so we have to use some other separator.
string(REPLACE ";" "!" CMAKE_PREFIX_PATH_ALT_SEP "${CMAKE_PREFIX_PATH}")
list(APPEND rocksdb_CMAKE_ARGS -DCMAKE_PREFIX_PATH=${CMAKE_PREFIX_PATH_ALT_SEP})
if(CMAKE_TOOLCHAIN_FILE)
list(APPEND rocksdb_CMAKE_ARGS
-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE})
endif()
if(ALLOCATOR STREQUAL "jemalloc")
list(APPEND rocksdb_CMAKE_ARGS -DWITH_JEMALLOC=ON)
list(APPEND rocksdb_INTERFACE_LINK_LIBRARIES JeMalloc::JeMalloc)
endif()
if (WITH_CCACHE AND CCACHE_FOUND)
list(APPEND rocksdb_CMAKE_ARGS -DCMAKE_CXX_COMPILER_LAUNCHER=ccache)
endif()
list(APPEND rocksdb_CMAKE_ARGS -DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER})
list(APPEND rocksdb_CMAKE_ARGS -DWITH_SNAPPY=${SNAPPY_FOUND})
if(SNAPPY_FOUND)
list(APPEND rocksdb_INTERFACE_LINK_LIBRARIES snappy::snappy)
endif()
  # libsnappy is a C++ library, so we need to force rocksdb to link against
  # libsnappy statically.
if(SNAPPY_FOUND AND WITH_STATIC_LIBSTDCXX)
list(APPEND rocksdb_CMAKE_ARGS -DWITH_SNAPPY_STATIC_LIB=ON)
endif()
list(APPEND rocksdb_CMAKE_ARGS -DWITH_LZ4=${LZ4_FOUND})
if(LZ4_FOUND)
list(APPEND rocksdb_INTERFACE_LINK_LIBRARIES LZ4::LZ4)
# When cross compiling, cmake may fail to locate lz4.
list(APPEND rocksdb_CMAKE_ARGS -Dlz4_INCLUDE_DIRS=${LZ4_INCLUDE_DIR})
list(APPEND rocksdb_CMAKE_ARGS -Dlz4_LIBRARIES=${LZ4_LIBRARY})
endif()
list(APPEND rocksdb_CMAKE_ARGS -DWITH_ZLIB=${ZLIB_FOUND})
if(ZLIB_FOUND)
list(APPEND rocksdb_INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
endif()
list(APPEND rocksdb_CMAKE_ARGS -DPORTABLE=ON)
list(APPEND rocksdb_CMAKE_ARGS -DCMAKE_AR=${CMAKE_AR})
list(APPEND rocksdb_CMAKE_ARGS -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE})
list(APPEND rocksdb_CMAKE_ARGS -DFAIL_ON_WARNINGS=OFF)
list(APPEND rocksdb_CMAKE_ARGS -DUSE_RTTI=1)
list(APPEND rocksdb_CMAKE_ARGS -G${CMAKE_GENERATOR})
CHECK_C_COMPILER_FLAG("-Wno-stringop-truncation" HAS_WARNING_STRINGOP_TRUNCATION)
if(HAS_WARNING_STRINGOP_TRUNCATION)
list(APPEND rocksdb_CMAKE_ARGS -DCMAKE_C_FLAGS=-Wno-stringop-truncation)
endif()
include(CheckCXXCompilerFlag)
check_cxx_compiler_flag("-Wno-deprecated-copy" HAS_WARNING_DEPRECATED_COPY)
if(HAS_WARNING_DEPRECATED_COPY)
set(rocksdb_CXX_FLAGS -Wno-deprecated-copy)
endif()
check_cxx_compiler_flag("-Wno-pessimizing-move" HAS_WARNING_PESSIMIZING_MOVE)
if(HAS_WARNING_PESSIMIZING_MOVE)
set(rocksdb_CXX_FLAGS "${rocksdb_CXX_FLAGS} -Wno-pessimizing-move")
endif()
if(rocksdb_CXX_FLAGS)
list(APPEND rocksdb_CMAKE_ARGS -DCMAKE_CXX_FLAGS='${rocksdb_CXX_FLAGS}')
endif()
  # we use an external project and copy the sources to the binary directory to
  # ensure that object files are built outside of the source tree.
include(ExternalProject)
set(rocksdb_SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/rocksdb")
set(rocksdb_BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}/rocksdb")
set(rocksdb_LIBRARY "${rocksdb_BINARY_DIR}/librocksdb.a")
if(CMAKE_MAKE_PROGRAM MATCHES "make")
# try to inherit command line arguments passed by parent "make" job
set(make_cmd $(MAKE) rocksdb)
else()
set(make_cmd ${CMAKE_COMMAND} --build <BINARY_DIR> --target rocksdb)
endif()
ExternalProject_Add(rocksdb_ext
SOURCE_DIR "${rocksdb_SOURCE_DIR}"
CMAKE_ARGS ${rocksdb_CMAKE_ARGS}
BINARY_DIR "${rocksdb_BINARY_DIR}"
BUILD_COMMAND "${make_cmd}"
BUILD_ALWAYS TRUE
BUILD_BYPRODUCTS "${rocksdb_LIBRARY}"
INSTALL_COMMAND "true"
LIST_SEPARATOR !)
add_library(RocksDB::RocksDB STATIC IMPORTED)
add_dependencies(RocksDB::RocksDB rocksdb_ext)
set(rocksdb_INCLUDE_DIR "${rocksdb_SOURCE_DIR}/include")
foreach(ver "MAJOR" "MINOR" "PATCH")
file(STRINGS "${rocksdb_INCLUDE_DIR}/rocksdb/version.h" ROCKSDB_VER_${ver}_LINE
REGEX "^#define[ \t]+ROCKSDB_${ver}[ \t]+[0-9]+$")
string(REGEX REPLACE "^#define[ \t]+ROCKSDB_${ver}[ \t]+([0-9]+)$"
"\\1" ROCKSDB_VERSION_${ver} "${ROCKSDB_VER_${ver}_LINE}")
    unset(ROCKSDB_VER_${ver}_LINE)
endforeach()
set(rocksdb_VERSION_STRING
"${ROCKSDB_VERSION_MAJOR}.${ROCKSDB_VERSION_MINOR}.${ROCKSDB_VERSION_PATCH}")
set_target_properties(RocksDB::RocksDB PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${rocksdb_INCLUDE_DIR}"
INTERFACE_LINK_LIBRARIES "${rocksdb_INTERFACE_LINK_LIBRARIES}"
IMPORTED_LINK_INTERFACE_LANGUAGES "CXX"
IMPORTED_LOCATION "${rocksdb_LIBRARY}"
VERSION "${rocksdb_VERSION_STRING}")
endfunction()
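# Usage sketch for build_rocksdb (an assumption for illustration: the real
# call site is the surrounding CMakeLists, and "my_store" is a hypothetical
# consumer target):
#
#   build_rocksdb()
#   # target_link_libraries(my_store PRIVATE RocksDB::RocksDB)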