# remove spec
rm -f ob-deploy.spec
# push to oceanbase-ce-publish/obdeploy
rm -rf .git && git init
git remote add origin git@gitlab.alibaba-inc.com:oceanbase-ce-publish/obdeploy
git add -f . && git commit -m "init push"
git push -f origin master
nohup.out
*.pyc
*.pyo
build
dist
.vscode
.git
__pycache__
.idea/workspace.xml
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
This file is part of OceanBase Deploy.
Copyright (C) 2021 OceanBase
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
OceanBase Deploy. Copyright (C) 2021 OceanBase
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
# OceanBase Deploy
<!--
#
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
#
-->
<!-- TODO: some badges here -->
**OceanBase Deploy** (OBD for short) is the installation and deployment tool for OceanBase open-source software. OBD is also a package manager that can manage all of the open-source software provided by OceanBase. This document describes how to install OBD, how to use it, and its commands.
## Install OBD
You can install OBD in either of the following ways:
### Option 1: Install from RPM packages (CentOS 7 or later)
```shell
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://yum.tbsite.net/mirrors/oceanbase/OceanBase.repo
sudo yum install -y ob-deploy
source /etc/profile.d/obd.sh
```
### Option 2: Install from source
Before you install OBD from source, make sure the following dependencies are installed:
- gcc
- python-devel
- openssl-devel
- xz-devel
- mysql-devel
For Python 2, run the following commands:
```shell
pip install -r requirements.txt
sh build.sh
source /etc/profile.d/obd.sh
```
For Python 3, run the following commands:
```shell
pip install -r requirements3.txt
sh build.sh
source /etc/profile.d/obd.sh
```
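If you are not sure which Python major version your environment provides, you can check it first and then pick the matching requirements file. This is plain Python tooling, not an OBD command:
```shell
# Prints something like "Python 2.7.5" or "Python 3.6.8".
python --version
```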
## Quickly start an OceanBase database
After you install OBD, you can run the following commands as the root user to quickly start a local single-node OceanBase database.
Before you do so, confirm the following:
- The current user is root.
- Ports `2882` and `2883` are not in use.
- Your machine has at least 8 GB of memory.
- Your machine has at least 2 CPU cores.
> **Note:** If these requirements are not met, see [Start an OceanBase database cluster with OBD](#start-an-oceanbase-database-cluster-with-obd).
```shell
obd cluster deploy c1 -c ./example/mini-local-example.yaml
obd cluster start c1
# Connect to the OceanBase database with the mysql client.
mysql -h127.1 -uroot -P2883
```
## Start an OceanBase database cluster with OBD
Follow these steps to start an OceanBase database cluster:
### Step 1. Select a configuration file
Select the configuration file that matches your resources:
#### Small-scale development mode
Suitable for personal devices with at least 8 GB of memory.
- [Sample configuration for a local single node](./example/mini-local-example.yaml)
- [Sample configuration for a single node](./example/mini-single-example.yaml)
- [Sample configuration for three nodes](./example/mini-distributed-example.yaml)
- [Sample configuration for a single node + ODP](./example/mini-single-with-obproxy-example.yaml)
- [Sample configuration for three nodes + ODP](./example/mini-distributed-with-obproxy-example.yaml)
#### Professional development mode
Suitable for high-spec ECS instances or physical servers with at least 16 cores and 64 GB of memory.
- [Sample configuration for a local single node](./example/local-example.yaml)
- [Sample configuration for a single node](./example/single-example.yaml)
- [Sample configuration for three nodes](./example/distributed-example.yaml)
- [Sample configuration for a single node + ODP](./example/single-with-obproxy-example.yaml)
- [Sample configuration for three nodes + ODP](./example/distributed-with-obproxy-example.yaml)
This document uses the [small-scale local single-node sample](./example/mini-local-example.yaml) as an example to start a local single-node OceanBase database.
```shell
# Set home_path, the working directory of the OceanBase database.
# Set mysql_port, the SQL service protocol port of the OceanBase database. You will use this port to connect to the database later.
# Set rpc_port, the port used for internal communication within the OceanBase cluster.
vi ./example/mini-local-example.yaml
```
If the target machine (the machine that will run the OceanBase database processes) is not the current machine, do not use the local single-node sample; use one of the other samples instead.
You also need to edit the user credentials at the top of the configuration file.
```yaml
user:
  username: <your username>
  password: <your login password>
  key_file: <path to your private key>
```
`username` is the user name used to log in to the target machine. Make sure this user has write permission on `home_path`. `password` and `key_file` are both ways to authenticate this user; normally you only need to provide one of them.
> **Note:** After you set the key path, if your key does not require a passphrase, comment out or remove `password`. Otherwise, `password` is treated as the key passphrase during login and authentication fails.
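Before deploying, it can help to confirm that the configured user and key can log in to the target machine without any prompt. The sketch below is an optional check run by hand, not an OBD command; the user, host, and key path are placeholders.
```shell
# Placeholders: replace the key path, user, and host with your own values.
# With a passphrase-free key, this should print "login ok" without prompting.
ssh -i ~/.ssh/id_rsa -o BatchMode=yes admin@192.168.1.10 'echo login ok'
```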
### Step 2. Deploy and start the database
```shell
# This command checks whether the directories pointed to by home_path and data_dir are empty.
# If a directory is not empty, an error is reported. You can add the -f option to force-clear it.
obd cluster deploy lo -c local-example.yaml
# This command checks whether the kernel parameter fs.aio-max-nr is at least 1048576.
# Normally you do not need to change fs.aio-max-nr to run a single node on one machine.
# If you start 4 or more nodes on one machine, make sure to raise fs.aio-max-nr.
obd cluster start lo
```
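If you do need to raise `fs.aio-max-nr` (for example, to run four or more nodes on one machine), the sketch below shows one common way to check and persist the setting with `sysctl`. This is a general Linux procedure rather than an OBD command, and it requires root privileges.
```shell
# Check the current value.
sysctl fs.aio-max-nr
# Raise it for the running system.
sudo sysctl -w fs.aio-max-nr=1048576
# Persist the change across reboots.
echo 'fs.aio-max-nr = 1048576' | sudo tee -a /etc/sysctl.conf
```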
### Step 3. Check the cluster status
```shell
# View the list of clusters managed by OBD.
obd cluster list
# Check the status of the lo cluster.
obd cluster display lo
```
### Step 4. Modify the configuration
OceanBase Database has hundreds of configuration items, and some of them are coupled with each other. Until you are familiar with OceanBase Database, we do not recommend modifying the configuration in the sample files. The following example shows how to modify a configuration item and make it take effect.
```shell
# Use the edit-config command to enter edit mode and modify the cluster configuration.
obd cluster edit-config lo
# Change sys_bkgd_migration_retry_num to 5.
# Note that the minimum value of sys_bkgd_migration_retry_num is 3.
# After you save and exit, OBD tells you how to make the change take effect.
# This configuration item only requires a reload to take effect.
obd cluster reload lo
```
### Step 5. Stop the cluster
The `stop` command stops a running cluster. If the `start` command fails but some processes have not exited, use the `destroy` command instead.
```shell
obd cluster stop lo
```
### Step 6. Destroy the cluster
Run the following command to destroy the cluster:
```shell
# If starting the cluster failed, some processes may still be running.
# In that case, use the -f option to forcibly stop them and destroy the cluster.
obd cluster destroy lo
```
## Other OBD commands
**OBD** has multiple levels of commands. At any level, you can use the `-h/--help` option to view the help information for that subcommand.
### Mirror and repository commands
#### `obd mirror clone`
Add a local RPM package as an image. You can then start it with the corresponding commands in the **obd cluster** command group.
```shell
obd mirror clone <path> [-f]
```
The `path` parameter is the path to the RPM package.
The `-f` option is `--force`. It is optional and disabled by default. When enabled, an existing image is forcibly overwritten.
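For example, assuming you have already downloaded an oceanbase-ce RPM to the current directory (the file name below is a placeholder), you could add it as an image like this:
```shell
obd mirror clone ./oceanbase-ce-3.1.0-1.el7.x86_64.rpm -f
```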
#### `obd mirror create`
Create an image from a local directory. This command is mainly used to start OceanBase open-source software that you compiled yourself: you can add the build output to the local repository with this command, and then start it with the related `obd cluster` commands.
```shell
obd mirror create -n <component name> -p <your compile dir> -V <component version> [-t <tag>] [-f]
```
For example, after compiling OceanBase by following the [OceanBase compilation guide](https://open.oceanbase.com/docs/community/oceanbase-database/V3.1.0/get-the-oceanbase-database-by-using-source-code), you can run `make DESTDIR=./ install && obd mirror create -n oceanbase-ce -V 3.1.0 -p ./usr/local` to add the build output to the OBD local repository.
The options are described in the following table:
Option | Required | Type | Description
--- | --- | --- | ---
-n/--name | Yes | string | Component name. Use oceanbase-ce if you compiled OceanBase Database, or obproxy if you compiled ODP.
-p/--path | Yes | string | Build directory, that is, the directory in which you ran the build command. OBD automatically collects the required files from this directory based on the component.
-V/--version | Yes | string | Version number.
-t/--tag | No | string | Image tags. You can define multiple tags for the image you create, separated by commas (,).
-f/--force | No | bool | Forcibly overwrite an existing image or tag. Disabled by default.
#### `obd mirror list`
Show the list of mirror repositories or images.
```shell
obd mirror list [mirror repo name]
```
The `mirror repo name` parameter is the name of a mirror repository and is optional. If it is omitted, the list of mirror repositories is shown; otherwise, the images in the specified repository are listed.
#### `obd mirror update`
Synchronize the information of all remote mirror repositories.
```shell
obd mirror update
```
### Cluster commands
The smallest unit that an OBD cluster command operates on is a deployment configuration. A deployment configuration is a `yaml` file that contains all of the configuration for a deployment, including server login information, component information, component configuration, and the server list for each component.
Before you can start a cluster with OBD, you need to register its deployment configuration with OBD. You can create an empty deployment configuration with `obd cluster edit-config`, or import one with `obd cluster deploy -c config`.
#### `obd cluster edit-config`
Modify a deployment configuration, or create it if it does not exist.
```shell
obd cluster edit-config <deploy name>
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
#### `obd cluster deploy`
Deploy a cluster according to its configuration. This command looks up suitable images based on the component information in the deployment configuration file and installs them into the local repository; this step is called the local installation.
It then distributes the components of the suitable version from the local repository to the target servers; this step is called the remote installation.
During both the local and remote installation, OBD checks whether the servers have the dependencies required to run the components.
This command can deploy a `deploy name` that is already registered with OBD, or take the configuration from a `yaml` file passed in.
```shell
obd cluster deploy <deploy name> [-c <yaml path>] [-f] [-U]
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
The options are described in the following table:
Option | Required | Type | Default | Description
--- | --- | --- | --- | ---
-c/--config | No | string | None | Deploy using the specified yaml file and register the deployment configuration with OBD.<br>If `deploy name` already exists, its configuration is overwritten.<br>If this option is not used, OBD looks up the configuration already registered under `deploy name`.
-f/--force | No | bool | false | When enabled, forcibly clear the working directory.<br>If a component requires an empty working directory and this option is not used, a non-empty working directory causes an error.
-U/--ulp/--unuselibrepo | No | bool | false | Use this option to stop OBD from handling dependencies automatically. When it is not enabled, OBD searches for and installs the relevant libs image whenever it detects missing dependencies. Using this option adds **unuse_lib_repository: true** to the corresponding configuration file. You can also enable the behavior by setting **unuse_lib_repository: true** in the configuration file.
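For example, the following sketch registers the mini local sample under the deployment name `demo` (a placeholder) and force-clears a non-empty working directory:
```shell
obd cluster deploy demo -c ./example/mini-local-example.yaml -f
```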
#### `obd cluster start`
Start a deployed cluster. On success, the cluster status is printed.
```shell
obd cluster start <deploy name> [-s]
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
The `-s` option is `--strict-check`. Some components run checks before starting; by default a failed check only produces a warning and does not stop the process. With this option, a failed check causes an error and the command exits immediately. Enabling it is recommended, because it helps avoid startup failures caused by insufficient resources. It is optional, of type `bool`, and disabled by default.
#### `obd cluster list`
Show the status of all clusters (deploy names) registered with OBD.
```shell
obd cluster list
```
#### `obd cluster display`
Show the status of the specified cluster.
```shell
obd cluster display <deploy name>
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
#### `obd cluster reload`
Reload a running cluster. After you modify the configuration of a running cluster with edit-config, you can apply the changes with the `reload` command.
Note that not every configuration item can be applied with `reload`. Some items require a cluster restart, or even a redeployment, to take effect.
Follow the instructions returned after edit-config.
```shell
obd cluster reload <deploy name>
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
#### `obd cluster restart`
Restart a running cluster. After you modify the configuration of a running cluster with edit-config, you can apply the changes with the `restart` command.
> **Note:** Not every configuration item can be applied with `restart`. Some items require redeploying the cluster to take effect.
Follow the instructions returned after edit-config.
```shell
obd cluster restart <deploy name>
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
#### `obd cluster redeploy`
Redeploy a running cluster. After you modify the configuration of a running cluster with edit-config, you can apply the changes with the `redeploy` command.
> **Note:** This command destroys the cluster and deploys it again. Data in the cluster will be lost, so back it up first.
```shell
obd cluster redeploy <deploy name>
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
#### `obd cluster stop`
Stop a running cluster.
```shell
obd cluster stop <deploy name>
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
#### `obd cluster destroy`
Destroy a deployed cluster. If the cluster is running, this command first tries to run `stop` and, if that succeeds, then runs `destroy`.
```shell
obd cluster destroy <deploy name> [-f]
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
The `-f` option is `--force-kill`. Before destroying, OBD checks whether any processes are still running in the working directory. These processes may be left over from a failed **start**, or may belong to another cluster whose configuration overlaps with this one. Either way, if processes have not exited from the working directory, **destroy** stops. With this option, those running processes are forcibly stopped and **destroy** is executed anyway. It is optional, of type `bool`, and disabled by default.
### Test commands
#### `obd test mysqltest`
Run mysqltest against a specified node of the OceanBase database or ODP component. mysqltest requires OBClient, so install OBClient first.
```shell
obd test mysqltest <deploy name> [--test-set <test-set>] [flags]
```
The `deploy name` parameter is the name of the deployment configuration, which you can think of as the configuration file name.
The options are described in the following table:
Option | Required | Type | Default | Description
--- | --- | --- | --- | ---
-c/--component | No | string | Empty | Name of the component to test. Valid values are oceanbase-ce and obproxy. When empty, OBD checks obproxy and then oceanbase-ce; the first component found is used for the test.
--test-server | No | string | The first node of the servers under the specified component. | Must be the name of a node under the specified component.
--mode | No | string | both | Test mode. Valid values are mysql and both.
--user | No | string | root | User name for running the tests.
--password | No | string | Empty | Password of the user running the tests.
--mysqltest-bin | No | string | mysqltest | Path to the mysqltest binary. If the specified path is not executable, the mysqltest bundled with OBD is used.
--obclient-bin | No | string | obclient | Directory where the OBClient binary is located.
--test-dir | No | string | ./mysql_test/t | Directory that stores the **test-file** required by mysqltest. If a test file is not found there, OBD looks for it among its built-in files.
--result-dir | No | string | ./mysql_test/r | Directory that stores the **result-file** required by mysqltest. If a result file is not found there, OBD looks for it among its built-in files.
--tmp-dir | No | string | ./tmp | Value passed to the mysqltest tmpdir option.
--var-dir | No | string | ./var | A log directory is created under this directory and passed to mysqltest as the logdir option.
--test-set | No | string | None | Array of test cases, separated by commas (,).
--test-pattern | No | string | None | Regular expression matched against test file names. Cases that match the expression override the --test-set option.
--suite | No | string | None | Array of suites. A suite contains multiple tests. Separate suites with commas (,). When this option is used, --test-pattern and --test-set are ignored.
--suite-dir | No | string | ./mysql_test/test_suite | Directory that contains the suite directories. If a suite directory is not found there, OBD looks for it among its built-in files.
--all | No | bool | false | Run all cases under --suite-dir.
--need-init | No | bool | false | Run init sql files. A new cluster may need some initialization before running mysqltest, such as creating the accounts and tenants required by the cases. Disabled by default.
--init-sql-dir | No | string | ../ | Directory that contains the init sql files. If a sql file is not found there, OBD looks for it among its built-in files.
--init-sql-files | No | string | | Array of init sql files to run when initialization is needed, separated by commas (,). If not specified and initialization is needed, OBD runs its built-in init scripts based on the cluster configuration.
--auto-retry | No | bool | false | Automatically redeploy the cluster and retry on failure.
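As an illustration, assuming a deployment named `lo`, the following sketch runs every case under the default `--suite-dir` against the oceanbase-ce component and runs the built-in init sql first:
```shell
obd test mysqltest lo --component oceanbase-ce --all --need-init
```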
## Q&A
### Q: How do I specify which component version to use?
A: Declare the version in the deployment configuration file. For example, if you are using OceanBase-CE 3.1.0, you can specify the following configuration:
```yaml
oceanbase-ce:
  version: 3.1.0
```
### Q: How do I use a specific build of a component?
A: Declare package_hash or tag in the deployment configuration file.
If you set a tag on the OceanBase-CE build you compiled yourself, you can select it by tag. For example:
```yaml
oceanbase-ce:
  tag: my-oceanbase
```
You can also use package_hash to select a specific build. When you run `obd mirror` commands, the md5 value of each component is printed; this value is the package_hash.
```yaml
oceanbase-ce:
  package_hash: 929df53459404d9b0c1f945e7e23ea4b89972069
```
### Q: I modified the OceanBase-CE code. How do I change the startup process?
A: You can modify the startup plugins under `~/.obd/plugins/oceanbase-ce/`. For example, if you added a new startup option to OceanBase-CE 3.1.0, you can modify `~/.obd/plugins/oceanbase-ce/3.1.0/start.py`.
## License
OBD is licensed under [GPL-3.0](./LICENSE).
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
import os
import ctypes
import struct
_ppc64_native_is_best = True
# dict mapping arch -> ( multicompat, best personality, biarch personality )
multilibArches = { "x86_64": ( "athlon", "x86_64", "athlon" ),
"sparc64v": ( "sparcv9v", "sparcv9v", "sparc64v" ),
"sparc64": ( "sparcv9", "sparcv9", "sparc64" ),
"ppc64": ( "ppc", "ppc", "ppc64" ),
"s390x": ( "s390", "s390x", "s390" ),
}
if _ppc64_native_is_best:
multilibArches["ppc64"] = ( "ppc", "ppc64", "ppc64" )
arches = {
# ia32
"athlon": "i686",
"i686": "i586",
"geode": "i686",
"i586": "i486",
"i486": "i386",
"i386": "noarch",
# amd64
"x86_64": "athlon",
"amd64": "x86_64",
"ia32e": "x86_64",
#ppc64le
"ppc64le": "noarch",
# ppc
"ppc64p7": "ppc64",
"ppc64pseries": "ppc64",
"ppc64iseries": "ppc64",
"ppc64": "ppc",
"ppc": "noarch",
# s390{,x}
"s390x": "s390",
"s390": "noarch",
# sparc
"sparc64v": "sparcv9v",
"sparc64": "sparcv9",
"sparcv9v": "sparcv9",
"sparcv9": "sparcv8",
"sparcv8": "sparc",
"sparc": "noarch",
# alpha
"alphaev7": "alphaev68",
"alphaev68": "alphaev67",
"alphaev67": "alphaev6",
"alphaev6": "alphapca56",
"alphapca56": "alphaev56",
"alphaev56": "alphaev5",
"alphaev5": "alphaev45",
"alphaev45": "alphaev4",
"alphaev4": "alpha",
"alpha": "noarch",
# arm
"armv7l": "armv6l",
"armv6l": "armv5tejl",
"armv5tejl": "armv5tel",
"armv5tel": "noarch",
#arm hardware floating point
"armv7hnl": "armv7hl",
"armv7hl": "noarch",
# arm64
"arm64": "noarch",
# aarch64
"aarch64": "noarch",
# super-h
"sh4a": "sh4",
"sh4": "noarch",
"sh3": "noarch",
#itanium
"ia64": "noarch",
}
# Will contain information parsed from /proc/self/auxv via _parse_auxv().
# Should move into rpm really.
_aux_vector = {
"platform": "",
"hwcap": 0,
}
def _try_read_cpuinfo():
""" Try to read /proc/cpuinfo ... if we can't ignore errors (ie. proc not
mounted). """
try:
return open("/proc/cpuinfo", "r")
except:
return []
def _parse_auxv():
""" Read /proc/self/auxv and parse it into global dict for easier access
later on, very similar to what rpm does. """
# In case we can't open and read /proc/self/auxv, just return
try:
data = open("/proc/self/auxv", "rb").read()
except:
return
# Define values from /usr/include/elf.h
AT_PLATFORM = 15
AT_HWCAP = 16
fmtlen = struct.calcsize("LL")
offset = 0
platform = ctypes.c_char_p()
# Parse the data and fill in _aux_vector dict
while offset <= len(data) - fmtlen:
at_type, at_val = struct.unpack_from("LL", data, offset)
if at_type == AT_PLATFORM:
platform.value = at_val
_aux_vector["platform"] = platform.value
if at_type == AT_HWCAP:
_aux_vector["hwcap"] = at_val
offset = offset + fmtlen
def getCanonX86Arch(arch):
#
if arch == "i586":
for line in _try_read_cpuinfo():
if line.startswith("model name"):
if line.find("Geode(TM)") != -1:
return "geode"
break
return arch
# only athlon vs i686 isn't handled with uname currently
if arch != "i686":
return arch
# if we're i686 and AuthenticAMD, then we should be an athlon
for line in _try_read_cpuinfo():
if line.startswith("vendor") and line.find("AuthenticAMD") != -1:
return "athlon"
# i686 doesn't guarantee cmov, but we depend on it
elif line.startswith("flags"):
if line.find("cmov") == -1:
return "i586"
break
return arch
def getCanonARMArch(arch):
# the %{_target_arch} macro in rpm will let us know the abi we are using
try:
import rpm
target = rpm.expandMacro('%{_target_cpu}')
if target.startswith('armv7h'):
return target
except:
pass
return arch
def getCanonPPCArch(arch):
# FIXME: should I do better handling for mac, etc?
if arch != "ppc64":
return arch
machine = None
for line in _try_read_cpuinfo():
if line.find("machine") != -1:
machine = line.split(':')[1]
break
platform = _aux_vector["platform"]
if machine is None and not platform:
return arch
try:
if platform.startswith("power") and int(platform[5:].rstrip('+')) >= 7:
return "ppc64p7"
except:
pass
if machine is None:
return arch
if machine.find("CHRP IBM") != -1:
return "ppc64pseries"
if machine.find("iSeries") != -1:
return "ppc64iseries"
return arch
def getCanonSPARCArch(arch):
# Deal with sun4v, sun4u, sun4m cases
SPARCtype = None
for line in _try_read_cpuinfo():
if line.startswith("type"):
SPARCtype = line.split(':')[1]
break
if SPARCtype is None:
return arch
if SPARCtype.find("sun4v") != -1:
if arch.startswith("sparc64"):
return "sparc64v"
else:
return "sparcv9v"
if SPARCtype.find("sun4u") != -1:
if arch.startswith("sparc64"):
return "sparc64"
else:
return "sparcv9"
if SPARCtype.find("sun4m") != -1:
return "sparcv8"
return arch
def getCanonX86_64Arch(arch):
if arch != "x86_64":
return arch
vendor = None
for line in _try_read_cpuinfo():
if line.startswith("vendor_id"):
vendor = line.split(':')[1]
break
if vendor is None:
return arch
if vendor.find("Authentic AMD") != -1 or vendor.find("AuthenticAMD") != -1:
return "amd64"
if vendor.find("GenuineIntel") != -1:
return "ia32e"
return arch
def getCanonArch(skipRpmPlatform = 0):
if not skipRpmPlatform and os.access("/etc/rpm/platform", os.R_OK):
try:
f = open("/etc/rpm/platform", "r")
line = f.readline()
f.close()
(arch, vendor, opersys) = line.split("-", 2)
return arch
except:
pass
arch = os.uname()[4]
_parse_auxv()
if (len(arch) == 4 and arch[0] == "i" and arch[2:4] == "86"):
return getCanonX86Arch(arch)
if arch.startswith("arm"):
return getCanonARMArch(arch)
if arch.startswith("ppc"):
return getCanonPPCArch(arch)
if arch.startswith("sparc"):
return getCanonSPARCArch(arch)
if arch == "x86_64":
return getCanonX86_64Arch(arch)
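# Canonical architecture of the current host, resolved once at import time and reused by the helpers below.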
canonArch = getCanonArch()
def isMultiLibArch(arch=None):
"""returns true if arch is a multilib arch, false if not"""
if arch is None:
arch = canonArch
if arch not in arches: # or we could check if it is noarch
return 0
if arch in multilibArches:
return 1
if arches[arch] in multilibArches:
return 1
return 0
def getBaseArch(myarch=None):
"""returns 'base' arch for myarch, if specified, or canonArch if not.
base arch is the arch before noarch in the arches dict if myarch is not
a key in the multilibArches."""
if not myarch:
myarch = canonArch
if myarch not in arches: # this is dumb, but <shrug>
return myarch
if myarch.startswith("sparc64"):
return "sparc"
elif myarch == "ppc64le":
return "ppc64le"
elif myarch.startswith("ppc64") and not _ppc64_native_is_best:
return "ppc"
elif myarch.startswith("arm64"):
return "arm64"
elif myarch.startswith("armv7h"):
return "armhfp"
elif myarch.startswith("arm"):
return "arm"
if isMultiLibArch(arch=myarch):
if myarch in multilibArches:
return myarch
else:
return arches[myarch]
if myarch in arches:
basearch = myarch
value = arches[basearch]
while value != 'noarch':
basearch = value
value = arches[basearch]
return basearch
def getArchList(thisarch=None):
# this returns a list of archs that are compatible with arch given
if not thisarch:
thisarch = canonArch
archlist = [thisarch]
while thisarch in arches:
thisarch = arches[thisarch]
archlist.append(thisarch)
# hack hack hack
# sparc64v is also sparc64 compat
if archlist[0] == "sparc64v":
archlist.insert(1,"sparc64")
# if we're a weirdo arch - add noarch on there.
if len(archlist) == 1 and archlist[0] == thisarch:
archlist.append('noarch')
return archlist
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import sys
import time
import logging
from logging import handlers
from uuid import uuid1 as uuid
from optparse import OptionParser,OptionGroup
from core import ObdHome
from _stdio import IO
from log import Logger
from tool import DirectoryUtil, FileUtil
ROOT_IO = IO(1)
VERSION = '1.0.0'
class BaseCommand(object):
def __init__(self, name, summary):
self.name = name
self.summary = summary
self.args = []
self.cmds = []
self.opts = {}
self.prev_cmd = ''
self.is_init = False
self.parser = OptionParser(add_help_option=False)
self.parser.add_option('-h', '--help', action='callback', callback=self._show_help, help='show this help message and exit')
self.parser.add_option('-v', '--verbose', action='callback', callback=self._set_verbose, help='verbose operation')
def _set_verbose(self, *args, **kwargs):
ROOT_IO.set_verbose_level(0xfffffff)
def init(self, cmd, args):
if self.is_init is False:
self.prev_cmd = cmd
self.args = args
self.is_init = True
self.parser.prog = self.prev_cmd
option_list = self.parser.option_list[2:]
option_list.append(self.parser.option_list[0])
option_list.append(self.parser.option_list[1])
self.parser.option_list = option_list
return self
def parse_command(self):
self.opts, self.cmds = self.parser.parse_args(self.args)
return self.opts
def do_command(self):
raise NotImplementedError
def _show_help(self, *args, **kwargs):
ROOT_IO.print(self._mk_usage())
self.parser.exit(1)
def _mk_usage(self):
return self.parser.format_help()
class ObdCommand(BaseCommand):
OBD_PATH = os.path.join(os.environ.get('OBD_HOME', os.getenv('HOME')), '.obd')
def init_home(self):
version_path = os.path.join(self.OBD_PATH, 'version')
need_update = True
version_fobj = FileUtil.open(version_path, 'a+', stdio=ROOT_IO)
version_fobj.seek(0)
version = version_fobj.read()
if VERSION.split('.') > version.split('.'):
obd_plugin_path = os.path.join(self.OBD_PATH, 'plugins')
if DirectoryUtil.mkdir(self.OBD_PATH):
root_plugin_path = '/usr/obd/plugins'
if os.path.exists(root_plugin_path):
ROOT_IO.verbose('copy %s to %s' % (root_plugin_path, obd_plugin_path))
DirectoryUtil.copy(root_plugin_path, obd_plugin_path, ROOT_IO)
obd_mirror_path = os.path.join(self.OBD_PATH, 'mirror')
obd_remote_mirror_path = os.path.join(self.OBD_PATH, 'mirror/remote')
if DirectoryUtil.mkdir(obd_mirror_path):
root_remote_mirror = '/usr/obd/mirror/remote'
if os.path.exists(root_remote_mirror):
ROOT_IO.verbose('copy %s to %s' % (root_remote_mirror, obd_remote_mirror_path))
DirectoryUtil.copy(root_remote_mirror, obd_remote_mirror_path, ROOT_IO)
version_fobj.seek(0)
version_fobj.truncate()
version_fobj.write(VERSION)
version_fobj.flush()
version_fobj.close()
def do_command(self):
self.parse_command()
self.init_home()
try:
log_dir = os.path.join(self.OBD_PATH, 'log')
DirectoryUtil.mkdir(log_dir)
log_path = os.path.join(log_dir, 'obd')
logger = Logger('obd')
handler = handlers.TimedRotatingFileHandler(log_path, when='midnight', interval=1, backupCount=30)
handler.setFormatter(logging.Formatter("[%%(asctime)s] [%s] [%%(levelname)s] %%(message)s" % uuid(), "%Y-%m-%d %H:%M:%S"))
logger.addHandler(handler)
ROOT_IO.trace_logger = logger
obd = ObdHome(self.OBD_PATH, ROOT_IO)
ROOT_IO.track_limit += 1
return self._do_command(obd)
except NotImplementedError:
ROOT_IO.exception('command \'%s\' is not implemented' % self.prev_cmd)
except IOError:
ROOT_IO.exception('obd is running')
except SystemExit:
pass
except:
ROOT_IO.exception('Run Error')
return False
def _do_command(self, obd):
raise NotImplementedError
class MajorCommand(BaseCommand):
def __init__(self, name, summary):
super(MajorCommand, self).__init__(name, summary)
self.commands = {}
def _mk_usage(self):
if self.commands:
usage = ['%s <command> [options]\n\nAvailable Commands:\n' % self.prev_cmd]
commands = [x for x in self.commands.values() if not (hasattr(x, 'hidden') and x.hidden)]
commands.sort(key=lambda x: x.name)
for command in commands:
usage.append("%-14s %s\n" % (command.name, command.summary))
self.parser.set_usage('\n'.join(usage))
return super(MajorCommand, self)._mk_usage()
def do_command(self):
if not self.is_init:
ROOT_IO.error('%s command not init' % self.prev_cmd)
raise SystemExit('command not init')
if len(self.args) < 1:
ROOT_IO.print('You need to give some command')
self._show_help()
return False
base, args = self.args[0], self.args[1:]
if base not in self.commands:
self.parse_command()
self._show_help()
return False
cmd = '%s %s' % (self.prev_cmd, base)
ROOT_IO.track_limit += 1
return self.commands[base].init(cmd, args).do_command()
def register_command(self, command):
self.commands[command.name] = command
class MirrorCloneCommand(ObdCommand):
def __init__(self):
super(MirrorCloneCommand, self).__init__('clone', 'clone remote mirror or local rpmfile as mirror.')
self.parser.add_option('-f', '--force', action='store_true', help="overwrite when mirror exist")
def init(self, cmd, args):
super(MirrorCloneCommand, self).init(cmd, args)
self.parser.set_usage('%s [mirror source] [options]' % self.prev_cmd)
return self
def _do_command(self, obd):
if self.cmds:
for src in self.cmds:
if not obd.add_mirror(src, self.opts):
return False
return True
else:
return self._show_help()
class MirrorCreateCommand(ObdCommand):
def __init__(self):
super(MirrorCreateCommand, self).__init__('create', 'create a local mirror by local binary file')
self.parser.conflict_handler = 'resolve'
self.parser.add_option('-n', '--name', type='string', help="mirror's name")
self.parser.add_option('-t', '--tag', type='string', help="mirror's tag, use `,` interval")
self.parser.add_option('-n', '--name', type='string', help="mirror's name")
self.parser.add_option('-V', '--version', type='string', help="mirror's version")
self.parser.add_option('-p','--path', type='string', help="mirror's path", default='./')
self.parser.add_option('-f', '--force', action='store_true', help="overwrite when mirror exist")
self.parser.conflict_handler = 'error'
def _do_command(self, obd):
return obd.create_repository(self.opts)
class MirrorListCommand(ObdCommand):
def __init__(self):
super(MirrorListCommand, self).__init__('list', 'list mirror')
def show_pkg(self, name, pkgs):
ROOT_IO.print_list(
pkgs,
['name', 'version', 'release', 'arch', 'md5'],
lambda x: [x.name, x.version, x.release, x.arch, x.md5],
title='%s Package List' % name
)
def _do_command(self, obd):
if self.cmds:
name = self.cmds[0]
if name == 'local':
pkgs = obd.mirror_manager.local_mirror.get_all_pkg_info()
self.show_pkg(name, pkgs)
return True
else:
repos = obd.mirror_manager.get_mirrors()
for repo in repos:
if repo.name == name:
pkgs = repo.get_all_pkg_info()
self.show_pkg(name, pkgs)
return True
ROOT_IO.error('No such mirror repository: %s' % name)
return False
else:
repos = obd.mirror_manager.get_mirrors()
ROOT_IO.print_list(
repos,
['name', 'type', 'update time'],
lambda x: [x.name, x.mirror_type.value, time.strftime("%Y-%m-%d %H:%M", time.localtime(x.repo_age))],
title='Mirror Repository List'
)
return True
class MirrorUpdateCommand(ObdCommand):
def __init__(self):
super(MirrorUpdateCommand, self).__init__('update', 'update remote mirror info')
def _do_command(self, obd):
success = True
repos = obd.mirror_manager.get_remote_mirrors()
for repo in repos:
try:
success = repo.update_mirror() and success
except:
success = False
ROOT_IO.stop_loading('fail')
                ROOT_IO.exception('fail to synchronize mirror (%s)' % repo.name)
return success
class MirrorMajorCommand(MajorCommand):
def __init__(self):
super(MirrorMajorCommand, self).__init__('mirror', 'Manage a component repository for obd.')
self.register_command(MirrorListCommand())
self.register_command(MirrorCloneCommand())
self.register_command(MirrorCreateCommand())
self.register_command(MirrorUpdateCommand())
class ClusterMirrorCommand(ObdCommand):
def init(self, cmd, args):
super(ClusterMirrorCommand, self).init(cmd, args)
self.parser.set_usage('%s [cluster name] [options]' % self.prev_cmd)
return self
class ClusterDeployCommand(ClusterMirrorCommand):
def __init__(self):
        super(ClusterDeployCommand, self).__init__('deploy', 'use current deploy config or a deploy yaml file to deploy a cluster')
self.parser.add_option('-c', '--config', type='string', help="cluster config yaml path")
self.parser.add_option('-f', '--force', action='store_true', help="remove all when home_path is not empty", default=False)
self.parser.add_option('-U', '--unuselibrepo', '--ulp', action='store_true', help="obd will not install libs when library is not found")
# self.parser.add_option('-F', '--fuzzymatch', action='store_true', help="enable fuzzy match when search package")
def _do_command(self, obd):
if self.cmds:
return obd.deploy_cluster(self.cmds[0], self.opts)
else:
return self._show_help()
class ClusterStartCommand(ClusterMirrorCommand):
def __init__(self):
        super(ClusterStartCommand, self).__init__('start', 'start a deployed cluster')
self.parser.add_option('-f', '--force-delete', action='store_true', help="cleanup when cluster had registered")
self.parser.add_option('-s', '--strict-check', action='store_true', help="prompt for errors instead of warnings when the check fails")
def _do_command(self, obd):
if self.cmds:
return obd.start_cluster(self.cmds[0], self.cmds[1:], self.opts)
else:
return self._show_help()
class ClusterStopCommand(ClusterMirrorCommand):
def __init__(self):
        super(ClusterStopCommand, self).__init__('stop', 'stop a started cluster')
def _do_command(self, obd):
if self.cmds:
return obd.stop_cluster(self.cmds[0])
else:
return self._show_help()
class ClusterDestroyCommand(ClusterMirrorCommand):
def __init__(self):
        super(ClusterDestroyCommand, self).__init__('destroy', 'destroy a deployed cluster')
self.parser.add_option('-f', '--force-kill', action='store_true', help="force kill when observer is running")
def _do_command(self, obd):
if self.cmds:
return obd.destroy_cluster(self.cmds[0], self.opts)
else:
return self._show_help()
class ClusterDisplayCommand(ClusterMirrorCommand):
def __init__(self):
super(ClusterDisplayCommand, self).__init__('display', 'display a cluster info')
def _do_command(self, obd):
if self.cmds:
return obd.display_cluster(self.cmds[0])
else:
return self._show_help()
class ClusterRestartCommand(ClusterMirrorCommand):
def __init__(self):
        super(ClusterRestartCommand, self).__init__('restart', 'restart a started cluster')
def _do_command(self, obd):
if self.cmds:
return obd.restart_cluster(self.cmds[0])
else:
return self._show_help()
class ClusterRedeployCommand(ClusterMirrorCommand):
def __init__(self):
        super(ClusterRedeployCommand, self).__init__('redeploy', 'redeploy a started cluster')
def _do_command(self, obd):
if self.cmds:
return obd.redeploy_cluster(self.cmds[0])
else:
return self._show_help()
class ClusterReloadCommand(ClusterMirrorCommand):
def __init__(self):
        super(ClusterReloadCommand, self).__init__('reload', 'reload a started cluster')
def _do_command(self, obd):
if self.cmds:
return obd.reload_cluster(self.cmds[0])
else:
return self._show_help()
class ClusterListCommand(ClusterMirrorCommand):
def __init__(self):
        super(ClusterListCommand, self).__init__('list', 'show all deploys')
def _do_command(self, obd):
if self.cmds:
return self._show_help()
else:
return obd.list_deploy()
class ClusterEditConfigCommand(ClusterMirrorCommand):
def __init__(self):
super(ClusterEditConfigCommand, self).__init__('edit-config', 'edit a deploy config')
def _do_command(self, obd):
if self.cmds:
return obd.edit_deploy_config(self.cmds[0])
else:
return self._show_help()
class ClusterMajorCommand(MajorCommand):
def __init__(self):
        super(ClusterMajorCommand, self).__init__('cluster', 'deploy and manage a cluster')
self.register_command(ClusterDeployCommand())
self.register_command(ClusterStartCommand())
self.register_command(ClusterStopCommand())
self.register_command(ClusterDestroyCommand())
self.register_command(ClusterDisplayCommand())
self.register_command(ClusterListCommand())
self.register_command(ClusterRestartCommand())
self.register_command(ClusterRedeployCommand())
self.register_command(ClusterEditConfigCommand())
self.register_command(ClusterReloadCommand())
class TestMirrorCommand(ObdCommand):
def init(self, cmd, args):
super(TestMirrorCommand, self).init(cmd, args)
self.parser.set_usage('%s [cluster name] [options]' % self.prev_cmd)
return self
class MySQLTestCommand(TestMirrorCommand):
def __init__(self):
super(MySQLTestCommand, self).__init__('mysqltest', 'run mysqltest for a deploy')
self.parser.add_option('--component', type='string', help='the component for mysqltest')
self.parser.add_option('--test-server', type='string', help='the server for mysqltest, default the first root server in the component')
self.parser.add_option('--user', type='string', help='username for test', default='admin')
self.parser.add_option('--password', type='string', help='password for test', default='admin')
self.parser.add_option('--database', type='string', help='database for test', default='test')
self.parser.add_option('--mysqltest-bin', type='string', help='mysqltest bin path', default='/u01/obclient/bin/mysqltest')
self.parser.add_option('--obclient-bin', type='string', help='obclient bin path', default='obclient')
self.parser.add_option('--test-dir', type='string', help='test case file directory', default='./mysql_test/t')
self.parser.add_option('--result-dir', type='string', help='result case file directory', default='./mysql_test/r')
self.parser.add_option('--record-dir', type='string', help='the directory of the result file for mysqltest')
self.parser.add_option('--log-dir', type='string', help='the directory of the log file', default='./log')
self.parser.add_option('--tmp-dir', type='string', help='tmp dir to use when run mysqltest', default='./tmp')
self.parser.add_option('--var-dir', type='string', help='var dir to use when run mysqltest', default='./var')
self.parser.add_option('--test-set', type='string', help='test list, use `,` interval')
self.parser.add_option('--test-pattern', type='string', help='pattern for test file')
self.parser.add_option('--suite', type='string', help='suite list, use `,` interval')
self.parser.add_option('--suite-dir', type='string', help='suite case directory', default='./mysql_test/test_suite')
self.parser.add_option('--init-sql-dir', type='string', help='init sql directory', default='../')
self.parser.add_option('--init-sql-files', type='string', help='init sql file list, use `,` interval')
self.parser.add_option('--need-init', action='store_true', help='exec init sql', default=False)
self.parser.add_option('--auto-retry', action='store_true', help='auto retry when failed', default=False)
self.parser.add_option('--all', action='store_true', help='run all suite-dir case', default=False)
self.parser.add_option('--psmall', action='store_true', help='run psmall case', default=False)
# self.parser.add_option('--java', action='store_true', help='use java sdk', default=False)
def _do_command(self, obd):
if self.cmds:
return obd.mysqltest(self.cmds[0], self.opts)
else:
return self._show_help()
class TestMajorCommand(MajorCommand):
def __init__(self):
super(TestMajorCommand, self).__init__('test', 'run test for a running deploy')
self.register_command(MySQLTestCommand())
class BenchMajorCommand(MajorCommand):
def __init__(self):
super(BenchMajorCommand, self).__init__('bench', '')
class MainCommand(MajorCommand):
def __init__(self):
super(MainCommand, self).__init__('obd', '')
self.register_command(MirrorMajorCommand())
self.register_command(ClusterMajorCommand())
self.register_command(TestMajorCommand())
self.parser.version = '''OceanBase Deploy: %s
Copyright (C) 2021 OceanBase
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.''' % (VERSION)
self.parser._add_version_option()
if __name__ == '__main__':
defaultencoding = 'utf-8'
if sys.getdefaultencoding() != defaultencoding:
try:
from imp import reload
except:
pass
reload(sys)
sys.setdefaultencoding(defaultencoding)
sys.path.append('/usr/obd/lib/site-packages')
ROOT_IO.track_limit += 2
if MainCommand().init('obd', sys.argv[1:]).do_command():
ROOT_IO.exit(0)
ROOT_IO.exit(1)
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import re
import getpass
from copy import deepcopy
from enum import Enum
from tool import ConfigUtil, FileUtil, YamlLoader
from _manager import Manager
from _repository import Repository
yaml = YamlLoader()
class UserConfig(object):
DEFAULT = {
'username': getpass.getuser(),
'password': None,
'key_file': None,
'port': 22,
'timeout': 30
}
def __init__(self, username=None, password=None, key_file=None, port=None, timeout=None):
self.username = username if username else self.DEFAULT['username']
self.password = password
self.key_file = key_file if key_file else self.DEFAULT['key_file']
self.port = port if port else self.DEFAULT['port']
self.timeout = timeout if timeout else self.DEFAULT['timeout']
class ServerConfig(object):
def __init__(self, ip, name=None):
self.ip = ip
self._name = name
@property
def name(self):
return self._name if self._name else self.ip
def __str__(self):
return '%s(%s)' % (self._name, self.ip) if self._name else self.ip
def __hash__(self):
return hash(self.__str__())
def __eq__(self, other):
if isinstance(other, self.__class__):
return self.ip == other.ip and self.name == other.name
if isinstance(other, dict):
return self.ip == other['ip'] and self.name == other['name']
class ServerConfigFlyweightFactory(object):
_CACHE = {}
@staticmethod
def get_instance(ip, name=None):
server = ServerConfig(ip, name)
_key = server.__str__()
if _key not in ServerConfigFlyweightFactory._CACHE:
ServerConfigFlyweightFactory._CACHE[_key] = server
return ServerConfigFlyweightFactory._CACHE[_key]
class ClusterConfig(object):
def __init__(self, servers, name, version, tag, package_hash):
self.version = version
self.tag = tag
self.name = name
self.package_hash = package_hash
self._temp_conf = {}
self._default_conf = {}
self._global_conf = {}
self._server_conf = {}
self._cache_server = {}
self.servers = servers
for server in servers:
self._server_conf[server] = {}
self._cache_server[server] = None
self._deploy_config = None
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self._global_conf == other._global_conf and self._server_conf == other._server_conf
def set_deploy_config(self, _deploy_config):
if self._deploy_config is None:
self._deploy_config = _deploy_config
return True
return False
def update_server_conf(self, server, key, value, save=True):
if self._deploy_config is None:
return False
if not self._deploy_config.update_component_server_conf(self.name, server, key, value, save):
return False
self._server_conf[server][key] = value
if self._cache_server[server] is not None:
self._cache_server[server][key] = value
return True
def update_global_conf(self, key, value, save=True):
if self._deploy_config is None:
return False
if not self._deploy_config.update_component_global_conf(self.name, key, value, save):
return False
self._global_conf[key] = value
for server in self._cache_server:
if self._cache_server[server] is not None:
self._cache_server[server][key] = value
return True
def get_unconfigured_require_item(self, server):
items = []
config = self.get_server_conf(server)
for key in self._default_conf:
if key in config:
continue
items.append(key)
return items
def get_server_conf_with_default(self, server):
config = {}
for key in self._temp_conf:
if self._temp_conf[key].default is not None:
config[key] = self._temp_conf[key].default
config.update(self.get_server_conf(server))
return config
def get_need_redeploy_items(self, server):
items = {}
config = self.get_server_conf(server)
for key in config:
if key in self._temp_conf and self._temp_conf[key].need_redeploy:
items[key] = config[key]
return items
def get_need_restart_items(self, server):
items = {}
config = self.get_server_conf(server)
for key in config:
if key in self._temp_conf and self._temp_conf[key].need_restart:
items[key] = config[key]
return items
def update_temp_conf(self, temp_conf):
self._default_conf = {}
self._temp_conf = temp_conf
for key in self._temp_conf:
if self._temp_conf[key].require:
self._default_conf[key] = self._temp_conf[key].default
        self.set_global_conf(self._global_conf)  # refresh the merged global conf with the new defaults
def set_global_conf(self, conf):
self._global_conf = deepcopy(self._default_conf)
self._global_conf.update(conf)
for server in self._cache_server:
self._cache_server[server] = None
def add_server_conf(self, server, conf):
if server not in self.servers:
self.servers.append(server)
self._server_conf[server] = conf
self._cache_server[server] = None
def get_global_conf(self):
return self._global_conf
def get_server_conf(self, server):
if server not in self._server_conf:
return None
if self._cache_server[server] is None:
conf = deepcopy(self._global_conf)
conf.update(self._server_conf[server])
self._cache_server[server] = conf
return self._cache_server[server]
class DeployStatus(Enum):
STATUS_CONFIGUREING = 'configuring'
STATUS_CONFIGURED = 'configured'
    STATUS_DEPLOYING = 'deploying'
STATUS_DEPLOYED = 'deployed'
STATUS_RUNNING = 'running'
STATUS_STOPING = 'stoping'
STATUS_STOPPED = 'stopped'
STATUS_DESTROYING = 'destroying'
STATUS_DESTROYED = 'destroyed'
class DeployConfigStatus(Enum):
UNCHNAGE = 'unchange'
NEED_RELOAD = 'need reload'
NEED_RESTART = 'need restart'
NEED_REDEPLOY = 'need redeploy'
class DeployInfo(object):
def __init__(self, name, status, components={}, config_status=DeployConfigStatus.UNCHNAGE):
self.status = status
self.name = name
self.components = components
self.config_status = config_status
def __str__(self):
info = ['%s (%s)' % (self.name, self.status.value)]
for name in self.components:
info.append('%s-%s' % (name, self.components[name]))
return '\n'.join(info)
class DeployConfig(object):
def __init__(self, yaml_path, yaml_loader=yaml):
self._user = None
self.unuse_lib_repository = False
self.components = {}
self._src_data = None
self.yaml_path = yaml_path
self.yaml_loader = yaml_loader
self._load()
@property
def user(self):
return self._user
def set_unuse_lib_repository(self, status):
if self.unuse_lib_repository != status:
self.unuse_lib_repository = status
self._src_data['unuse_lib_repository'] = status
return self._dump()
return True
def _load(self):
try:
with open(self.yaml_path, 'rb') as f:
self._src_data = self.yaml_loader.load(f)
for key in self._src_data:
if key == 'user':
self.set_user_conf(UserConfig(
ConfigUtil.get_value_from_dict(self._src_data[key], 'username'),
ConfigUtil.get_value_from_dict(self._src_data[key], 'password'),
ConfigUtil.get_value_from_dict(self._src_data[key], 'key_file'),
ConfigUtil.get_value_from_dict(self._src_data[key], 'port', 0, int),
ConfigUtil.get_value_from_dict(self._src_data[key], 'timeout', 0, int),
))
elif key == 'unuse_lib_repository':
self.unuse_lib_repository = self._src_data['unuse_lib_repository']
else:
self._add_component(key, self._src_data[key])
except:
pass
if not self.user:
self.set_user_conf(UserConfig())
def _dump(self):
try:
with open(self.yaml_path, 'w') as f:
self.yaml_loader.dump(self._src_data, f)
return True
except:
pass
return False
def dump(self):
return self._dump()
def set_user_conf(self, conf):
self._user = conf
def update_component_server_conf(self, component_name, server, key, value, save=True):
if component_name not in self.components:
return False
cluster_config = self.components[component_name]
if server not in cluster_config.servers:
return False
component_config = self._src_data[component_name]
if server.name not in component_config:
component_config[server.name] = {key: value}
else:
component_config[server.name][key] = value
return self.dump() if save else True
def update_component_global_conf(self, component_name, key, value, save=True):
if component_name not in self.components:
return False
component_config = self._src_data[component_name]
if 'global' not in component_config:
component_config['global'] = {key: value}
else:
component_config['global'][key] = value
return self.dump() if save else True
def _add_component(self, component_name, conf):
if 'servers' in conf and isinstance(conf['servers'], list):
servers = []
for server in conf['servers']:
if isinstance(server, dict):
ip = ConfigUtil.get_value_from_dict(server, 'ip', transform_func=str)
name = ConfigUtil.get_value_from_dict(server, 'name', transform_func=str)
else:
ip = server
name = None
if not re.match('^\d{1,3}(\\.\d{1,3}){3}$', ip):
continue
server = ServerConfigFlyweightFactory.get_instance(ip, name)
if server not in servers:
servers.append(server)
else:
servers = []
cluster_conf = ClusterConfig(
servers,
component_name,
ConfigUtil.get_value_from_dict(conf, 'version', None, str),
ConfigUtil.get_value_from_dict(conf, 'tag', None, str),
ConfigUtil.get_value_from_dict(conf, 'package_hash', None, str)
)
if 'global' in conf:
cluster_conf.set_global_conf(conf['global'])
for server in servers:
if server.name in conf:
cluster_conf.add_server_conf(server, conf[server.name])
cluster_conf.set_deploy_config(self)
self.components[component_name] = cluster_conf
class Deploy(object):
DEPLOY_STATUS_FILE = '.data'
DEPLOY_YAML_NAME = 'config.yaml'
def __init__(self, config_dir, stdio=None):
self.config_dir = config_dir
self.name = os.path.split(config_dir)[1]
self._info = None
self._config = None
self.stdio = stdio
def use_model(self, name, repository, dump=True):
self.deploy_info.components[name] = {
'hash': repository.hash,
'version': repository.version,
}
return self._dump_deploy_info() if dump else True
@staticmethod
def get_deploy_file_path(path):
return os.path.join(path, Deploy.DEPLOY_STATUS_FILE)
@staticmethod
def get_deploy_yaml_path(path):
return os.path.join(path, Deploy.DEPLOY_YAML_NAME)
@staticmethod
def get_temp_deploy_yaml_path(path):
return os.path.join(path, 'tmp_%s' % Deploy.DEPLOY_YAML_NAME)
@property
def deploy_info(self):
if self._info is None:
try:
path = self.get_deploy_file_path(self.config_dir)
with open(path, 'rb') as f:
data = yaml.load(f)
self._info = DeployInfo(
data['name'],
getattr(DeployStatus, data['status'], DeployStatus.STATUS_CONFIGURED),
ConfigUtil.get_value_from_dict(data, 'components', {}),
getattr(DeployConfigStatus, ConfigUtil.get_value_from_dict(data, 'config_status', '_'), DeployConfigStatus.UNCHNAGE),
)
except:
self._info = DeployInfo(self.name, DeployStatus.STATUS_CONFIGURED)
return self._info
@property
def deploy_config(self):
if self._config is None:
try:
path = self.get_deploy_yaml_path(self.config_dir)
self._config = DeployConfig(path, YamlLoader(stdio=self.stdio))
deploy_info = self.deploy_info
for component_name in deploy_info.components:
if component_name not in self._config.components:
continue
config = deploy_info.components[component_name]
cluster_config = self._config.components[component_name]
if 'version' in config and config['version']:
cluster_config.version = config['version']
if 'hash' in config and config['hash']:
cluster_config.package_hash = config['hash']
except:
pass
return self._config
def apply_temp_deploy_config(self):
src_yaml_path = self.get_temp_deploy_yaml_path(self.config_dir)
target_src_path = self.get_deploy_yaml_path(self.config_dir)
try:
FileUtil.move(src_yaml_path, target_src_path)
self._config = None
self.update_deploy_config_status(DeployConfigStatus.UNCHNAGE)
return True
except Exception as e:
self.stdio and getattr(self.stdio, 'exception', print)('mv %s to %s failed, error: \n%s' % (src_yaml_path, target_src_path, e))
return False
def _dump_deploy_info(self):
path = self.get_deploy_file_path(self.config_dir)
self.stdio and getattr(self.stdio, 'verbose', print)('dump deploy info to %s' % path)
try:
with open(path, 'w') as f:
data = {
'name': self.deploy_info.name,
'components': self.deploy_info.components,
'status': self.deploy_info.status.name,
'config_status': self.deploy_info.config_status.name,
}
yaml.dump(data, f)
return True
except:
self.stdio and getattr(self.stdio, 'exception', print)('dump deploy info to %s failed' % path)
return False
def update_deploy_status(self, status):
if isinstance(status, DeployStatus):
self.deploy_info.status = status
if DeployStatus.STATUS_DESTROYED == status:
self.deploy_info.components = {}
return self._dump_deploy_info()
return False
def update_deploy_config_status(self, status):
if isinstance(status, DeployConfigStatus):
self.deploy_info.config_status = status
return self._dump_deploy_info()
return False
class DeployManager(Manager):
RELATIVE_PATH = 'cluster/'
def __init__(self, home_path, stdio=None):
super(DeployManager, self).__init__(home_path, stdio)
def get_deploy_configs(self):
configs = []
for file_name in os.listdir(self.path):
path = os.path.join(self.path, file_name)
if os.path.isdir(path):
configs.append(Deploy(path, self.stdio))
return configs
def get_deploy_config(self, name):
path = os.path.join(self.path, name)
if os.path.isdir(path):
return Deploy(path, self.stdio)
return None
def create_deploy_config(self, name, src_yaml_path):
config_dir = os.path.join(self.path, name)
target_src_path = Deploy.get_deploy_yaml_path(config_dir)
self._mkdir(config_dir)
if FileUtil.copy(src_yaml_path, target_src_path, self.stdio):
return Deploy(config_dir, self.stdio)
else:
self._rm(config_dir)
return None
def remove_deploy_config(self, name):
config_dir = os.path.join(self.path, name)
self._rm(config_dir)
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
from tool import DirectoryUtil
class Manager(object):
RELATIVE_PATH = ''
def __init__(self, home_path, stdio=None):
self.stdio = stdio
self.path = os.path.join(home_path, self.RELATIVE_PATH)
self.is_init = self._mkdir(self.path)
def _mkdir(self, path):
return DirectoryUtil.mkdir(path, stdio=self.stdio)
def _rm(self, path):
return DirectoryUtil.rm(path, self.stdio)
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import re
import os
import sys
import time
import pickle
import string
import requests
from glob import glob
from enum import Enum
from xml.etree import cElementTree
try:
from ConfigParser import ConfigParser
except:
from configparser import ConfigParser
from _arch import getArchList, getBaseArch
from _rpm import Package
from tool import ConfigUtil, FileUtil
from _manager import Manager
_KEYCRE = re.compile(r"\$(\w+)")
_ARCH = getArchList()
_RELEASE = None
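# Best-effort detection of the distribution release number from /etc/*-release; exposed below as $releasever when expanding repo baseurls.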
for path in glob('/etc/*-release'):
with FileUtil.open(path) as f:
info = f.read()
m = re.search('VERSION_ID="(\d+)', info)
if m:
_RELEASE = m.group(1)
break
_SERVER_VARS = {
'basearch': getBaseArch(),
'releasever': _RELEASE
}
class MirrorRepositoryType(Enum):
LOCAL = 'local'
REMOTE = 'remote'
class MirrorRepository(object):
MIRROR_TYPE = None
def __init__(self, mirror_path, stdio=None):
self.stdio = stdio
self.mirror_path = mirror_path
self.name = os.path.split(mirror_path)[1]
@property
def mirror_type(self):
return self.MIRROR_TYPE
def get_all_pkg_info(self):
return []
def get_best_pkg(self, **pattern):
info = self.get_best_pkg_info(**pattern)
return self.get_rpm_pkg_by_info(info) if info else None
def get_exact_pkg(self, **pattern):
info = self.get_exact_pkg_info(**pattern)
return self.get_rpm_pkg_by_info(info) if info else None
def get_rpm_pkg_by_info(self, pkg_info):
return None
def get_pkgs_info(self, **pattern):
return []
def get_best_pkg_info(self, **pattern):
return None
def get_exact_pkg_info(self, **pattern):
return None
def get_pkgs_info_with_score(self, **pattern):
return []
class RemoteMirrorRepository(MirrorRepository):
class RemotePackageInfo(object):
def __init__(self, elem):
self.name = None
self.arch = None
self.epoch = None
self.release = None
self.version = None
self.location = (None, None)
self.checksum = (None,None) # type,value
self.openchecksum = (None,None) # type,value
self.time = (None, None)
self._parser(elem)
@property
def md5(self):
return self.checksum[1]
def __str__(self):
url = self.location[1]
if self.location[0]:
url = self.location[0] + url
return url
def _parser(self, elem):
tags = self.__dict__.keys()
for child in elem:
child_name = RemoteMirrorRepository.ns_cleanup(child.tag)
if child_name == 'location':
relative = child.attrib.get('href')
base = child.attrib.get('base')
self.location = (base, relative)
elif child_name == 'checksum':
csum_value = child.text
csum_type = child.attrib.get('type')
self.checksum = (csum_type,csum_value)
elif child_name == 'open-checksum':
csum_value = child.text
csum_type = child.attrib.get('type')
self.openchecksum = (csum_type, csum_value)
elif child_name == 'version':
self.epoch = child.attrib.get('epoch')
self.version = child.attrib.get('ver')
self.release = child.attrib.get('rel')
elif child_name == 'time':
build = child.attrib.get('build')
_file = child.attrib.get('file')
self.location = (_file, build)
elif child_name == 'arch':
self.arch = child.text
elif child_name == 'name':
self.name = child.text
class RepoData(object):
def __init__(self, elem):
self.type = None
self.type = elem.attrib.get('type')
self.location = (None, None)
self.checksum = (None,None) # type,value
self.openchecksum = (None,None) # type,value
self.timestamp = None
self.dbversion = None
self.size = None
self.opensize = None
self.deltas = []
self._parser(elem)
def _parser(self, elem):
for child in elem:
child_name = RemoteMirrorRepository.ns_cleanup(child.tag)
if child_name == 'location':
relative = child.attrib.get('href')
base = child.attrib.get('base')
self.location = (base, relative)
elif child_name == 'checksum':
csum_value = child.text
csum_type = child.attrib.get('type')
self.checksum = (csum_type,csum_value)
elif child_name == 'open-checksum':
csum_value = child.text
csum_type = child.attrib.get('type')
self.openchecksum = (csum_type, csum_value)
elif child_name == 'timestamp':
self.timestamp = child.text
elif child_name == 'database_version':
self.dbversion = child.text
elif child_name == 'size':
self.size = child.text
elif child_name == 'open-size':
self.opensize = child.text
elif child_name == 'delta':
delta = RepoData(child)
delta.type = self.type
self.deltas.append(delta)
MIRROR_TYPE = MirrorRepositoryType.REMOTE
REMOTE_REPOMD_FILE = '/repodata/repomd.xml'
REPOMD_FILE = 'repomd.xml'
OTHER_DB_FILE = 'other_db.xml'
REPO_AGE_FILE = '.rege_age'
PRIMARY_REPOMD_TYPE = 'primary'
def __init__(self, mirror_path, meta_data, stdio=None):
self.baseurl = None
self.repomd_age = 0
self.repo_age = 0
self.priority = 1
self.gpgcheck = False
self._db = None
self._repomds = None
super(RemoteMirrorRepository, self).__init__(mirror_path, stdio=stdio)
self.baseurl = self.var_replace(meta_data['baseurl'], _SERVER_VARS)
self.gpgcheck = ConfigUtil.get_value_from_dict(meta_data, 'gpgcheck', 0, int) > 0
self.priority = 100 - ConfigUtil.get_value_from_dict(meta_data, 'priority', 99, int)
if os.path.exists(mirror_path):
self._load_repo_age()
repo_age = ConfigUtil.get_value_from_dict(meta_data, 'repo_age', 0, int)
if repo_age > self.repo_age:
self.repo_age = repo_age
self.update_mirror()
@property
def db(self):
if self._db is not None:
return self._db
primary_repomd = self._get_repomd_by_type(self.PRIMARY_REPOMD_TYPE)
if not primary_repomd:
return []
file_path = self._get_repomd_data_file(primary_repomd)
if not file_path:
return []
fp = FileUtil.unzip(file_path)
if not fp:
return []
self._db = {}
parser = cElementTree.iterparse(fp)
for event, elem in parser:
if RemoteMirrorRepository.ns_cleanup(elem.tag) == 'package' and elem.attrib.get('type') == 'rpm':
info = RemoteMirrorRepository.RemotePackageInfo(elem)
# self._db.append(info)
self._db[info.md5] = info
return self._db
@staticmethod
def ns_cleanup(qn):
return qn if qn.find('}') == -1 else qn.split('}')[1]
@staticmethod
def get_repo_age_file(mirror_path):
return os.path.join(mirror_path, RemoteMirrorRepository.REPO_AGE_FILE)
@staticmethod
def get_repomd_file(mirror_path):
return os.path.join(mirror_path, RemoteMirrorRepository.REPOMD_FILE)
@staticmethod
def get_other_db_file(mirror_path):
return os.path.join(mirror_path, RemoteMirrorRepository.OTHER_DB_FILE)
@staticmethod
def var_replace(string, var):
if not var:
return string
done = []
while string:
m = _KEYCRE.search(string)
if not m:
done.append(string)
break
varname = m.group(1).lower()
replacement = var.get(varname, m.group())
start, end = m.span()
done.append(string[:start])
done.append(replacement)
string = string[end:]
return ''.join(done)
def _load_repo_age(self):
try:
with open(self.get_repo_age_file(self.mirror_path), 'r') as f:
self.repo_age = int(f.read())
except:
pass
def _dump_repo_age_data(self):
try:
with open(self.get_repo_age_file(self.mirror_path), 'w') as f:
f.write(str(self.repo_age))
return True
except:
pass
return False
def _get_repomd_by_type(self, repomd_type):
repodmds = self.get_repomds()
for repodmd in repodmds:
if repodmd.type == repomd_type:
return repodmd
def _get_repomd_data_file(self, repomd):
file_name = repomd.location[1]
repomd_name = file_name.split('-')[-1]
file_path = os.path.join(self.mirror_path, file_name)
if os.path.exists(file_path):
return file_path
base_url = repomd.location[0] if repomd.location[0] else self.baseurl
url = '%s/%s' % (base_url, repomd.location[1])
if self.download_file(url, file_path, self.stdio):
return file_path
def update_mirror(self):
self.stdio and getattr(self.stdio, 'start_loading')('Update %s' % self.name)
self.get_repomds(True)
primary_repomd = self._get_repomd_by_type(self.PRIMARY_REPOMD_TYPE)
if not primary_repomd:
self.stdio and getattr(self.stdio, 'stop_loading')('fail')
return False
file_path = self._get_repomd_data_file(primary_repomd)
if not file_path:
self.stdio and getattr(self.stdio, 'stop_loading')('fail')
return False
self._db = None
self.repo_age = int(time.time())
self._dump_repo_age_data()
self.stdio and getattr(self.stdio, 'stop_loading')('succeed')
return True
def get_repomds(self, update=False):
path = self.get_repomd_file(self.mirror_path)
if update or not os.path.exists(path):
url = '%s/%s' % (self.baseurl, self.REMOTE_REPOMD_FILE)
self.download_file(url, path, self.stdio)
self._repomds = None
if self._repomds is None:
self._repomds = []
try:
parser = cElementTree.iterparse(path)
for event, elem in parser:
if RemoteMirrorRepository.ns_cleanup(elem.tag) == 'data':
repod = RemoteMirrorRepository.RepoData(elem)
self._repomds.append(repod)
except:
pass
return self._repomds
def get_all_pkg_info(self):
return [self.db[key] for key in self.db]
def get_rpm_pkg_by_info(self, pkg_info):
file_name = pkg_info.location[1]
file_path = os.path.join(self.mirror_path, file_name)
self.stdio and getattr(self.stdio, 'verbose', print)('get RPM package by %s' % pkg_info)
if not os.path.exists(file_path) or os.stat(file_path)[8] < self.repo_age:
base_url = pkg_info.location[0] if pkg_info.location[0] else self.baseurl
url = '%s/%s' % (base_url, pkg_info.location[1])
if not self.download_file(url, file_path, self.stdio):
return None
return Package(file_path)
def get_pkgs_info(self, **pattern):
matchs = self.get_pkgs_info_with_score(**pattern)
if matchs:
            return [info for info in sorted(matchs, key=lambda x: x[1], reverse=True)]
return matchs
def get_best_pkg_info(self, **pattern):
matchs = self.get_pkgs_info_with_score(**pattern)
if matchs:
return Package(max(matchs, key=lambda x: x[1])[0].path)
return None
def get_exact_pkg_info(self, **pattern):
self.stdio and getattr(self.stdio, 'verbose', print)('check md5 in pattern or not')
if 'md5' in pattern and pattern['md5']:
return self.db[pattern['md5']] if pattern['md5'] in self.db else None
self.stdio and getattr(self.stdio, 'verbose', print)('check name in pattern or not')
        if 'name' not in pattern or not pattern['name']:
return None
name = pattern['name']
self.stdio and getattr(self.stdio, 'verbose', print)('check arch in pattern or not')
arch = getArchList(pattern['arch']) if 'arch' in pattern and pattern['arch'] else _ARCH
self.stdio and getattr(self.stdio, 'verbose', print)('check release in pattern or not')
release = pattern['release'] if 'release' in pattern else None
self.stdio and getattr(self.stdio, 'verbose', print)('check version in pattern or not')
version = pattern['version'] if 'version' in pattern else None
for key in self.db:
info = self.db[key]
if info.name != name:
continue
if info.arch not in arch:
continue
if release and info.release != release:
continue
if version and version != info.version:
continue
return info
return None
def get_pkgs_info_with_score(self, **pattern):
matchs = []
self.stdio and getattr(self.stdio, 'verbose', print)('check md5 in pattern or not')
if 'md5' in pattern and pattern['md5']:
            return [[self.db[pattern['md5']], (0xfffffffff, )]] if pattern['md5'] in self.db else matchs
self.stdio and getattr(self.stdio, 'verbose', print)('check name in pattern or not')
        if 'name' not in pattern or not pattern['name']:
return matchs
self.stdio and getattr(self.stdio, 'verbose', print)('check arch in pattern or not')
if 'arch' in pattern and pattern['arch']:
pattern['arch'] = getArchList(pattern['arch'])
else:
pattern['arch'] = _ARCH
self.stdio and getattr(self.stdio, 'verbose', print)('check version in pattern or not')
if 'version' in pattern and pattern['version']:
pattern['version'] += '.'
for key in self.db:
info = self.db[key]
if pattern['name'] in info.name:
matchs.append([info, self.match_score(info, **pattern)])
return matchs
def match_score(self, info, name, arch, version=None):
if info.arch not in arch:
return [0, ]
info_version = '%s.' % info.version
if version and info_version.find(version) != 0:
return [0 ,]
c = info.version.split('.')
c.insert(0, len(name) / len(info.name))
return c
@staticmethod
def validate_repoid(repoid):
"""Return the first invalid char found in the repoid, or None."""
allowed_chars = string.ascii_letters + string.digits + '-_.:'
for char in repoid:
if char not in allowed_chars:
return char
else:
return None
@staticmethod
def download_file(url, save_path, stdio=None):
try:
with requests.get(url, stream=True) as fget:
file_size = int(fget.headers["Content-Length"])
if stdio:
print_bar = True
for func in ['start_progressbar', 'update_progressbar', 'finish_progressbar']:
if getattr(stdio, func, False) is False:
print_bar = False
break
else:
print_bar = False
if print_bar:
_, fine_name = os.path.split(save_path)
units = {"B": 1, "K": 1<<10, "M": 1<<20, "G": 1<<30, "T": 1<<40}
for unit in units:
num = file_size / units[unit]
if num < 1024:
break
stdio.start_progressbar('Download %s (%.2f %s)' % (fine_name, num, unit), file_size)
chunk_size = 512
file_done = 0
with FileUtil.open(save_path, "wb", stdio) as fw:
for chunk in fget.iter_content(chunk_size):
fw.write(chunk)
file_done = file_done + chunk_size
if print_bar and file_done <= file_size:
stdio.update_progressbar(file_done)
print_bar and stdio.finish_progressbar()
return True
except:
FileUtil.rm(save_path)
stdio and getattr(stdio, 'exception', print)('Failed to download %s to %s' % (url, save_path))
return False
class LocalMirrorRepository(MirrorRepository):
MIRROR_TYPE = MirrorRepositoryType.LOCAL
_DB_FILE = '.db'
def __init__(self, mirror_path, stdio=None):
super(LocalMirrorRepository, self).__init__(mirror_path, stdio=stdio)
self.db = {}
self.db_path = os.path.join(mirror_path, self._DB_FILE)
self._load_db()
@property
def repo_age(self):
return int(time.time())
def _load_db(self):
try:
with open(self.db_path, 'rb') as f:
db = pickle.load(f)
for key in db:
data = db[key]
path = getattr(data, 'path', False)
if not path or not os.path.exists(path):
continue
self.db[key] = data
except:
pass
def _dump_db(self):
        # all of these dump schemes are temporary
try:
with open(self.db_path, 'wb') as f:
pickle.dump(self.db, f)
return True
except:
pass
return False
def exist_pkg(self, pkg):
return pkg.md5 in self.db
def add_pkg(self, pkg):
target_path = os.path.join(self.mirror_path, pkg.file_name)
try:
src_path = pkg.path
self.stdio and getattr(self.stdio, 'verbose', print)('RPM hash check')
if target_path != src_path:
if pkg.md5 in self.db:
t_info = self.db[pkg.md5]
self.stdio and getattr(self.stdio, 'verbose', print)('copy %s to %s' % (src_path, target_path))
if t_info.path == target_path:
del self.db[t_info.md5]
FileUtil.copy(src_path, target_path)
else:
FileUtil.copy(src_path, target_path)
try:
self.stdio and getattr(self.stdio, 'verbose', print)('remove %s' % t_info.path)
os.remove(t_info.path)
except:
pass
else:
FileUtil.copy(src_path, target_path)
pkg.path = target_path
else:
self.stdio and getattr(self.stdio, 'error', print)('same file')
return None
self.db[pkg.md5] = pkg
self.stdio and getattr(self.stdio, 'verbose', print)('dump PackageInfo')
if self._dump_db():
self.stdio and getattr(self.stdio, 'print', print)('add %s to local mirror', src_path)
return pkg
except IOError:
            self.stdio and getattr(self.stdio, 'exception', print)('')
self.stdio and getattr(self.stdio, 'error', print)('Set local mirror failed. %s IO Error' % pkg.file_name)
except:
self.stdio and getattr(self.stdio, 'exception', print)('')
self.stdio and getattr(self.stdio, 'error', print)('Unable to add %s as local mirror' % pkg.file_name)
return None
def get_all_pkg_info(self):
return [self.db[key] for key in self.db]
def get_rpm_pkg_by_info(self, pkg_info):
self.stdio and getattr(self.stdio, 'verbose', print)('get RPM package by %s' % pkg_info)
return Package(pkg_info.path)
def get_pkgs_info(self, **pattern):
matchs = self.get_pkgs_info_with_score(**pattern)
if matchs:
            return [info for info in sorted(matchs, key=lambda x: x[1], reverse=True)]
return matchs
def get_best_pkg_info(self, **pattern):
matchs = self.get_pkgs_info_with_score(**pattern)
if matchs:
return Package(max(matchs, key=lambda x: x[1])[0].path)
return None
def get_exact_pkg_info(self, **pattern):
self.stdio and getattr(self.stdio, 'verbose', print)('check md5 in pattern or not')
if 'md5' in pattern and pattern['md5']:
return self.db[pattern['md5']] if pattern['md5'] in self.db else None
self.stdio and getattr(self.stdio, 'verbose', print)('check name in pattern or not')
        if 'name' not in pattern or not pattern['name']:
return None
name = pattern['name']
self.stdio and getattr(self.stdio, 'verbose', print)('check arch in pattern or not')
arch = getArchList(pattern['arch']) if 'arch' in pattern and pattern['arch'] else _ARCH
self.stdio and getattr(self.stdio, 'verbose', print)('check release in pattern or not')
release = pattern['release'] if 'release' in pattern else None
self.stdio and getattr(self.stdio, 'verbose', print)('check version in pattern or not')
version = pattern['version'] if 'version' in pattern else None
for key in self.db:
info = self.db[key]
if info.name != name:
continue
if info.arch not in arch:
continue
if release and info.release != release:
continue
if version and version != info.version:
continue
return info
return None
def get_best_pkg_info_with_score(self, **pattern):
matchs = self.get_pkgs_info_with_score(**pattern)
if matchs:
return max(matchs, key=lambda x: x[1])
return None
def get_pkgs_info_with_score(self, **pattern):
matchs = []
self.stdio and getattr(self.stdio, 'verbose', print)('check md5 in pattern or not')
if 'md5' in pattern and pattern['md5']:
return [[self.db[pattern['md5']], (0xfffffffff, )]] if pattern['md5'] in self.db else matchs
self.stdio and getattr(self.stdio, 'verbose', print)('check name in pattern or not')
if 'name' not in pattern or not pattern['name']:
return matchs
self.stdio and getattr(self.stdio, 'verbose', print)('check arch in pattern or not')
if 'arch' in pattern and pattern['arch']:
pattern['arch'] = getArchList(pattern['arch'])
else:
pattern['arch'] = _ARCH
self.stdio and getattr(self.stdio, 'verbose', print)('check version in pattern or not')
if 'version' in pattern and pattern['version']:
pattern['version'] += '.'
for key in self.db:
info = self.db[key]
if pattern['name'] in info.name:
matchs.append([info, self.match_score(info, **pattern)])
return matchs
def match_score(self, info, name, arch, version=None):
if info.arch not in arch:
return [0, ]
info_version = '%s.' % info.version
if version and info_version.find(version) != 0:
return [0, ]
c = info.version.split('.')
c.insert(0, len(name) / len(info.name))
return c
def get_info_list(self):
return [self.db[key] for key in self.db]
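# A minimal usage sketch for LocalMirrorRepository (the mirror path and package
# name below are hypothetical, for illustration only):
#
# repo = LocalMirrorRepository('/root/.obd/mirror/local')
# # exact lookup by name/version/arch against the pickled db
# info = repo.get_exact_pkg_info(name='oceanbase-ce', version='3.1.0')
# # fuzzy lookup: every package whose name contains the pattern is scored by
# # match_score (arch match, version prefix, name-length ratio plus version parts)
# pkg = repo.get_best_pkg_info(name='oceanbase')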
class MirrorRepositoryManager(Manager):
RELATIVE_PATH = 'mirror'
def __init__(self, home_path, stdio=None):
super(MirrorRepositoryManager, self).__init__(home_path, stdio=stdio)
self.remote_path = os.path.join(self.path, 'remote') # rpm remote mirror cache
self.local_path = os.path.join(self.path, 'local')
self.is_init = self.is_init and self._mkdir(self.remote_path) and self._mkdir(self.local_path)
self._local_mirror = None
@property
def local_mirror(self):
if self._local_mirror is None:
self._local_mirror = LocalMirrorRepository(self.local_path, self.stdio)
return self._local_mirror
def get_remote_mirrors(self):
mirrors = []
for path in glob(os.path.join(self.remote_path, '*.repo')):
repo_age = os.stat(path)[8]
with open(path, 'r') as confpp_obj:
parser = ConfigParser()
parser.readfp(confpp_obj)
for section in parser.sections():
if section in ['main', 'installed']:
continue
bad = RemoteMirrorRepository.validate_repoid(section)
if bad:
continue
meta_data = {}
for attr in parser.options(section):
value = parser.get(section, attr)
meta_data[attr] = value
if 'enabled' in meta_data and not meta_data['enabled']:
continue
if 'name' not in meta_data:
meta_data['name'] = section
if 'repo_age' not in meta_data:
meta_data['repo_age'] = repo_age
meta_data['name'] = RemoteMirrorRepository.var_replace(meta_data['name'], _SERVER_VARS)
mirror_path = os.path.join(self.remote_path, meta_data['name'])
mirror = RemoteMirrorRepository(mirror_path, meta_data, self.stdio)
mirrors.append(mirror)
return mirrors
def get_mirrors(self):
mirros = self.get_remote_mirrors()
mirros.append(self.local_mirror)
return mirros
def get_exact_pkg(self, **pattern):
only_info = 'only_info' in pattern and pattern['only_info']
mirrors = self.get_mirrors()
for mirror in mirrors:
info = mirror.get_exact_pkg_info(**pattern)
if info:
return info if only_info else mirror.get_rpm_pkg_by_info(info)
return None
def get_best_pkg(self, **pattern):
if 'fuzzy' not in pattern or not pattern['fuzzy']:
return self.get_exact_pkg(**pattern)
only_info = 'only_info' in pattern and pattern['only_info']
mirrors = self.get_mirrors()
best = None
source_mirror = None
for mirror in mirrors:
t_best = mirror.get_best_pkg_info_with_score(**pattern)
if best is None:
best = t_best
source_mirror = mirror
elif t_best and t_best[1] > best[1]:
best = t_best
source_mirror = mirror
if best:
return best[0] if only_info else source_mirror.get_rpm_pkg_by_info(best[0])
def add_remote_mirror(self, src):
pass
def add_local_mirror(self, src, force=False):
self.stdio and getattr(self.stdio, 'verbose', print)('%s is file or not' % src)
if not os.path.isfile(src):
self.stdio and getattr(self.stdio, 'error', print)('No such file: %s' % src)
return None
try:
self.stdio and getattr(self.stdio, 'verbose', print)('load %s to Package Object' % src)
pkg = Package(src)
except:
self.stdio and getattr(self.stdio, 'exception', print)('')
self.stdio and getattr(self.stdio, 'error', print)('failed to extract info from %s' % src)
return None
if self.local_mirror.exist_pkg(pkg) and not force:
if not self.stdio:
return None
if not getattr(self.stdio, 'confirm', False):
return None
if not self.stdio.confirm('mirror %s existed. Do you want to overwrite?' % pkg.file_name):
return None
self.stdio and getattr(self.stdio, 'print', print)('%s' % pkg)
return self.local_mirror.add_pkg(pkg)
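# A minimal usage sketch for MirrorRepositoryManager (the obd home path and rpm
# file name below are hypothetical):
#
# manager = MirrorRepositoryManager('/root/.obd')
# manager.add_local_mirror('./oceanbase-ce-3.1.0.el7.x86_64.rpm')
# pkg = manager.get_best_pkg(name='oceanbase-ce', fuzzy=True)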
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import sys
from enum import Enum
from glob import glob
from copy import deepcopy
from _manager import Manager
from tool import ConfigUtil, DynamicLoading, YamlLoader
yaml = YamlLoader()
class PluginType(Enum):
START = 'StartPlugin'
PARAM = 'ParamPlugin'
INSTALL = 'InstallPlugin'
PY_SCRIPT = 'PyScriptPlugin'
class Plugin(object):
PLUGIN_TYPE = None
FLAG_FILE = None
def __init__(self, component_name, plugin_path, version):
if not self.PLUGIN_TYPE or not self.FLAG_FILE:
raise NotImplementedError
self.component_name = component_name
self.plugin_path = plugin_path
self.version = version.split('.')
def __str__(self):
return '%s-%s-%s' % (self.component_name, self.PLUGIN_TYPE.name.lower(), '.'.join(self.version))
@property
def mirror_type(self):
return self.PLUGIN_TYPE
class PluginReturn(object):
def __init__(self, value=False, *arg, **kwargs):
self._return_value = value
self._return_args = arg
self._return_kwargs = kwargs
def __nonzero__(self):
return self.__bool__()
def __bool__(self):
return True if self._return_value else False
@property
def value(self):
return self._return_value
@property
def args(self):
return self._return_args
@property
def kwargs(self):
return self._return_kwargs
def get_return(self, key):
if key in self.kwargs:
return self.kwargs[key]
return None
def set_args(self, *args):
self._return_args = args
def set_kwargs(self, **kwargs):
self._return_kwargs = kwargs
def set_return(self, value):
self._return_value = value
def return_true(self, *args, **kwargs):
self.set_return(True)
self.set_args(*args)
self.set_kwargs(**kwargs)
def return_false(self, *args, **kwargs):
self.set_return(False)
self.set_args(*args)
self.set_kwargs(**kwargs)
class PluginContext(object):
def __init__(self, components, clients, cluster_config, cmd, options, stdio):
self.components = components
self.clients = clients
self.cluster_config = cluster_config
self.cmd = cmd
self.options = options
self.stdio = stdio
self._return = PluginReturn()
def get_return(self):
return self._return
def return_true(self, *args, **kwargs):
self._return.return_true(*args, **kwargs)
def return_false(self, *args, **kwargs):
self._return.return_false(*args, **kwargs)
class SubIO(object):
def __init__(self, stdio):
self.stdio = getattr(stdio, 'sub_io', lambda: None)()
self._func = {}
def __del__(self):
self.before_close()
def _temp_function(self, *arg, **kwargs):
pass
def __getattr__(self, name):
if name not in self._func:
self._func[name] = getattr(self.stdio, name, self._temp_function)
return self._func[name]
class ScriptPlugin(Plugin):
class ClientForScriptPlugin(object):
def __init__(self, client, stdio):
self.client = client
self.stdio = stdio
def __getattr__(self, key):
def new_method(*args, **kwargs):
kwargs['stdio'] = self.stdio
return attr(*args, **kwargs)
attr = getattr(self.client, key)
if hasattr(attr, '__call__'):
return new_method
return attr
def __init__(self, component_name, plugin_path, version):
super(ScriptPlugin, self).__init__(component_name, plugin_path, version)
self.context = None
def __call__(self):
raise NotImplementedError
def _import(self, stdio=None):
raise NotImplementedError
def _export(self):
raise NotImplementedError
def __del__(self):
self._export()
def before_do(self, components, clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
self._import(stdio)
sub_stdio = SubIO(stdio)
sub_clients = {}
for server in clients:
sub_clients[server] = ScriptPlugin.ClientForScriptPlugin(clients[server], sub_stdio)
self.context = PluginContext(components, sub_clients, cluster_config, cmd, options, sub_stdio)
def after_do(self, stdio, *arg, **kwargs):
self._export(stdio)
self.context = None
def pyScriptPluginExec(func):
def _new_func(self, components, clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
self.before_do(components, clients, cluster_config, cmd, options, stdio, *arg, **kwargs)
if self.module:
method_name = func.__name__
method = getattr(self.module, method_name, False)
if method:
try:
method(self.context, *arg, **kwargs)
except Exception as e:
stdio and getattr(stdio, 'exception', print)('%s RuntimeError: %s' % (self, e))
pass
ret = self.context.get_return() if self.context else PluginReturn()
self.after_do(stdio, *arg, **kwargs)
return ret
return _new_func
class PyScriptPlugin(ScriptPlugin):
LIBS_PATH = []
PLUGIN_COMPONENT_NAME = None
def __init__(self, component_name, plugin_path, version):
if not self.PLUGIN_COMPONENT_NAME:
raise NotImplementedError
super(PyScriptPlugin, self).__init__(component_name, plugin_path, version)
self.module = None
self.libs_path = deepcopy(self.LIBS_PATH)
self.libs_path.append(self.plugin_path)
def __call__(self, components, clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
method = getattr(self, self.PLUGIN_COMPONENT_NAME, False)
if method:
return method(components, clients, cluster_config, cmd, options, stdio, *arg, **kwargs)
else:
raise NotImplementedError
def _import(self, stdio=None):
if self.module is None:
DynamicLoading.add_libs_path(self.libs_path)
self.module = DynamicLoading.import_module(self.PLUGIN_COMPONENT_NAME, stdio)
def _export(self, stdio=None):
if self.module:
DynamicLoading.remove_libs_path(self.libs_path)
DynamicLoading.export_module(self.PLUGIN_COMPONENT_NAME, stdio)
# this is PyScriptPlugin demo
# class InitPlugin(PyScriptPlugin):
# FLAG_FILE = 'init.py'
# PLUGIN_COMPONENT_NAME = 'init'
# PLUGIN_TYPE = PluginType.INIT
# def __init__(self, component_name, plugin_path, version):
# super(InitPlugin, self).__init__(component_name, plugin_path, version)
# @pyScriptPluginExec
# def init(self, components, ssh_clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
# pass
class ParamPlugin(Plugin):
class ConfigItem(object):
def __init__(self, name, default=None, require=False, need_restart=False, need_redeploy=False):
self.name = name
self.default = default
self.require = require
self.need_restart = need_restart
self.need_redeploy = need_redeploy
PLUGIN_TYPE = PluginType.PARAM
DEF_PARAM_YAML = 'parameter.yaml'
FLAG_FILE = DEF_PARAM_YAML
def __init__(self, component_name, plugin_path, version):
super(ParamPlugin, self).__init__(component_name, plugin_path, version)
self.def_param_yaml_path = os.path.join(self.plugin_path, self.DEF_PARAM_YAML)
self._src_data = None
@property
def params(self):
if self._src_data is None:
try:
self._src_data = {}
with open(self.def_param_yaml_path, 'rb') as f:
configs = yaml.load(f)
for conf in configs:
self._src_data[conf['name']] = ParamPlugin.ConfigItem(
conf['name'],
ConfigUtil.get_value_from_dict(conf, 'default', None),
ConfigUtil.get_value_from_dict(conf, 'require', False),
ConfigUtil.get_value_from_dict(conf, 'need_restart', False),
ConfigUtil.get_value_from_dict(conf, 'need_redeploy', False),
)
except:
pass
return self._src_data
def get_need_redeploy_items(self):
items = []
params = self.params
for name in params:
conf = params[name]
if conf.need_redeploy:
items.append(name)
return items
def get_need_restart_items(self):
items = []
params = self.params
for name in params:
conf = params[name]
if conf.need_restart:
items.append(name)
return items
def get_params_default(self):
temp = {}
params = self.params
for name in params:
conf = params[name]
temp[conf.name] = conf.default
return temp
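# A ParamPlugin expects <plugin_path>/parameter.yaml to be a list of config items
# with the keys read above. A hedged example of the expected layout (the parameter
# names are illustrative only):
#
# - name: home_path
#   require: true
#   need_redeploy: true
# - name: mysql_port
#   default: 2881
#   need_restart: true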
class InstallPlugin(Plugin):
class FileItem(object):
def __init__(self, src_path, target_path, _type):
self.src_path = src_path
self.target_path = target_path
self.type = _type if _type else 'file'
PLUGIN_TYPE = PluginType.INSTALL
FILES_MAP_YAML = 'file_map.yaml'
FLAG_FILE = FILES_MAP_YAML
def __init__(self, component_name, plugin_path, version):
super(InstallPlugin, self).__init__(component_name, plugin_path, version)
self.file_map_path = os.path.join(self.plugin_path, self.FILES_MAP_YAML)
self._file_map = None
@property
def file_map(self):
if self._file_map is None:
try:
self._file_map = {}
with open(self.file_map_path, 'rb') as f:
file_map = yaml.load(f)
for data in file_map:
k = data['src_path']
if k[0] != '.':
k = '.%s' % os.path.join('/', k)
self._file_map[k] = InstallPlugin.FileItem(
k,
ConfigUtil.get_value_from_dict(data, 'target_path', k),
ConfigUtil.get_value_from_dict(data, 'type', None)
)
except:
pass
return self._file_map
def file_list(self):
file_map = self.file_map
return [file_map[k] for k in file_map]
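# An InstallPlugin expects <plugin_path>/file_map.yaml to be a list of file items
# with the keys read above. A hedged example of the expected layout (the paths are
# illustrative only):
#
# - src_path: ./home/admin/oceanbase/bin/observer
#   target_path: bin/observer
#   type: bin
# - src_path: ./home/admin/oceanbase/etc
#   target_path: etc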
class ComponentPluginLoader(object):
PLUGIN_TYPE = None
def __init__(self, home_path, plugin_type=PLUGIN_TYPE, stdio=None):
if plugin_type:
self.PLUGIN_TYPE = plugin_type
if not self.PLUGIN_TYPE:
raise NotImplementedError
self.plguin_cls = getattr(sys.modules[__name__], self.PLUGIN_TYPE.value, False)
if not self.plguin_cls:
raise ImportError(self.PLUGIN_TYPE.value)
self.stdio = stdio
self.path = home_path
self.component_name = os.path.split(self.path)[1]
self._plugins = {}
def get_plugins(self):
plugins = []
for flag_path in glob('%s/*/%s' % (self.path, self.plguin_cls.FLAG_FILE)):
if flag_path in self._plugins:
plugins.append(self._plugins[flag_path])
else:
path, _ = os.path.split(flag_path)
_, version = os.path.split(path)
plugin = self.plguin_cls(self.component_name, path, version)
self._plugins[flag_path] = plugin
plugins.append(plugin)
return plugins
def get_best_plugin(self, version):
version = version.split('.')
plugins = []
for plugin in self.get_plugins():
if plugin.version == version:
return plugin
if plugin.version < version:
plugins.append(plugin)
if plugins:
plugin = max(plugins, key=lambda x: x.version)
self.stdio and getattr(self.stdio, 'warn', print)(
'%s %s plugin version %s not found, use the best suitable version %s\n. Use `obd update` to update local plugin repository' %
(self.component_name, self.PLUGIN_TYPE.name.lower(), '.'.join(version), '.'.join(plugin.version))
)
return plugin
return None
class PyScriptPluginLoader(ComponentPluginLoader):
class PyScriptPluginType(object):
def __init__(self, name, value):
self.name = name
self.value = value
PLUGIN_TYPE = PluginType.PY_SCRIPT
def __init__(self, home_path, script_name=None, stdio=None):
if not script_name:
raise NotImplementedError
type_name = 'PY_SCRIPT_%s' % script_name.upper()
type_value = 'PyScript%sPlugin' % ''.join([word.capitalize() for word in script_name.split('_')])
self.PLUGIN_TYPE = PyScriptPluginLoader.PyScriptPluginType(type_name, type_value)
if not getattr(sys.modules[__name__], type_value, False):
self._create_(script_name)
super(PyScriptPluginLoader, self).__init__(home_path, stdio=stdio)
def _create_(self, script_name):
exec('''
class %s(PyScriptPlugin):
FLAG_FILE = '%s.py'
PLUGIN_COMPONENT_NAME = '%s'
def __init__(self, component_name, plugin_path, version):
super(%s, self).__init__(component_name, plugin_path, version)
@staticmethod
def set_plugin_type(plugin_type):
%s.PLUGIN_TYPE = plugin_type
@pyScriptPluginExec
def %s(self, components, ssh_clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
pass
''' % (self.PLUGIN_TYPE.value, script_name, script_name, self.PLUGIN_TYPE.value, self.PLUGIN_TYPE.value, script_name))
clz = locals()[self.PLUGIN_TYPE.value]
setattr(sys.modules[__name__], self.PLUGIN_TYPE.value, clz)
clz.set_plugin_type(self.PLUGIN_TYPE)
return clz
class PluginManager(Manager):
RELATIVE_PATH = 'plugins'
# The directory structure for plugin is ./plugins/{component_name}/{version}
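# A hypothetical layout as an example (component and version are illustrative):
#   ./plugins/oceanbase-ce/3.1.0/parameter.yaml  -> ParamPlugin
#   ./plugins/oceanbase-ce/3.1.0/file_map.yaml   -> InstallPlugin
#   ./plugins/oceanbase-ce/3.1.0/init.py         -> PyScriptPlugin for script "init"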
def __init__(self, home_path, stdio=None):
super(PluginManager, self).__init__(home_path, stdio=stdio)
self.component_plugin_loaders = {}
self.py_script_plugin_loaders = {}
for plugin_type in PluginType:
self.component_plugin_loaders[plugin_type] = {}
# PyScriptPluginLoader is a customized script loader. It needs special processing.
# Log off the PyScriptPluginLoader in component_plugin_loaders
del self.component_plugin_loaders[PluginType.PY_SCRIPT]
def get_best_plugin(self, plugin_type, component_name, version):
if plugin_type not in self.component_plugin_loaders:
return None
loaders = self.component_plugin_loaders[plugin_type]
if component_name not in loaders:
loaders[component_name] = ComponentPluginLoader(os.path.join(self.path, component_name), plugin_type, self.stdio)
loader = loaders[component_name]
return loader.get_best_plugin(version)
# Mainly used to load custom Python script plugins.
# Unlike get_best_plugin, this method can load Python script plugins that are not registered in PluginType.
# This makes it easy to add a custom plugin: just create the corresponding python file in the plugin repository and expose a method with the same name.
# It also makes it easier to describe the whole deployment workflow as plugins later on.
# A commented usage sketch follows this method.
def get_best_py_script_plugin(self, script_name, component_name, version):
if script_name not in self.py_script_plugin_loaders:
self.py_script_plugin_loaders[script_name] = {}
loaders = self.py_script_plugin_loaders[script_name]
if component_name not in loaders:
loaders[component_name] = PyScriptPluginLoader(os.path.join(self.path, component_name), script_name, self.stdio)
loader = loaders[component_name]
return loader.get_best_plugin(version)
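# A minimal usage sketch for get_best_py_script_plugin (home path, script name,
# component and version are hypothetical). If ./plugins/oceanbase-ce/3.1.0/bootstrap.py
# defines a function named bootstrap, it can be loaded without registering a new PluginType.
# components, ssh_clients, cluster_config, options and stdio come from the caller's context:
#
# plugin_manager = PluginManager('/root/.obd')
# bootstrap_plugin = plugin_manager.get_best_py_script_plugin('bootstrap', 'oceanbase-ce', '3.1.0')
# if bootstrap_plugin:
#     ret = bootstrap_plugin(components, ssh_clients, cluster_config, [], options, stdio)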
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import sys
import hashlib
from glob import glob
from _rpm import Package
from _arch import getBaseArch
from tool import DirectoryUtil, FileUtil, YamlLoader
from _manager import Manager
class LocalPackage(Package):
class RpmObject(object):
def __init__(self, headers, files):
self.files = files
self.opens = {}
self.headers = headers
def __exit__(self, *arg, **kwargs):
for path in self.opens:
self.opens[path].close()
def __enter__(self):
self.__exit__()
self.opens = {}
return self
def extractfile(self, name):
if name not in self.files:
raise KeyError("member %s could not be found" % name)
path = self.files[name]
if path not in self.opens:
self.opens[path] = open(path, 'rb')
return self.opens[path]
def __init__(self, path, name, version, files, release=None, arch=None):
self.name = name
self.version = version
self.md5 = None
self.release = release if release else version
self.arch = arch if arch else getBaseArch()
self.headers = {}
self.files = files
self.path = path
self.package()
def package(self):
count = 0
dirnames = []
filemd5s = []
filemodes = []
basenames = []
dirindexes = []
filelinktos = []
dirnames_map = {}
m_sum = hashlib.md5()
for src_path in self.files:
target_path = self.files[src_path]
dirname, basename = os.path.split(src_path)
if dirname not in dirnames_map:
dirnames.append(dirname)
dirnames_map[dirname] = count
count += 1
basenames.append(basename)
dirindexes.append(dirnames_map[dirname])
if os.path.islink(target_path):
filemd5s.append('')
filelinktos.append(os.readlink(target_path))
filemodes.append(-24065)
else:
m = hashlib.md5()
with open(target_path, 'rb') as f:
m.update(f.read())
m_value = m.hexdigest().encode(sys.getdefaultencoding())
m_sum.update(m_value)
filemd5s.append(m_value)
filelinktos.append('')
filemodes.append(os.stat(target_path).st_mode)
self.headers = {
'dirnames': dirnames,
'filemd5s': filemd5s,
'filemodes': filemodes,
'basenames': basenames,
'dirindexes': dirindexes,
'filelinktos': filelinktos,
}
self.md5 = m_sum.hexdigest()
def open(self):
return self.RpmObject(self.headers, self.files)
class Repository(object):
_DATA_FILE = '.data'
def __init__(self, name, repository_dir, stdio=None):
self.repository_dir = repository_dir
self.name = name
self.version = None
self.hash = None
self.stdio = stdio
self._load()
def __str__(self):
return '%s-%s-%s' % (self.name, self.version, self.hash)
def __hash__(self):
return hash(self.repository_dir)
def is_shadow_repository(self):
if os.path.exists(self.repository_dir):
return os.path.islink(self.repository_dir)
return False
@property
def data_file_path(self):
path = os.readlink(self.repository_dir) if os.path.islink(self.repository_dir) else self.repository_dir
return os.path.join(path, Repository._DATA_FILE)
# It is unclear whether the requirename field of an open-source rpm lists only the required dependencies
def require_list(self):
return []
# Since it is unclear whether the rpm requirename lists only the required dependencies, dependencies are checked by running ldd on the bin files for now (see the commented sketch after bin_list)
def bin_list(self, plugin):
files = []
if self.version and self.hash:
for file_item in plugin.file_list():
if file_item.type == 'bin':
files.append(os.path.join(self.repository_dir, file_item.target_path))
return files
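# A hedged sketch of how a caller might use bin_list for the ldd-based dependency
# check mentioned above; the shell invocation below is an assumption, not a fixed API:
#
# import subprocess
# for bin_path in repository.bin_list(install_plugin):
#     out = subprocess.check_output(['ldd', bin_path]).decode(errors='replace')
#     if 'not found' in out:
#         print('%s has missing shared libraries' % bin_path)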
def file_list(self, plugin):
files = []
if self.version and self.hash:
for file_item in plugin.file_list():
files.append(os.path.join(self.repository_dir, file_item.target_path))
return files
def file_check(self, plugin):
for file_path in self.file_list(plugin):
if not os.path.exists(file_path):
return False
return True
def __eq__(self, other):
if isinstance(other, self.__class__):
return self.version == other.version and self.hash == other.hash
if isinstance(other, dict):
return self.version == other['version'] and self.hash == other['hash']
def _load(self):
try:
with open(self.data_file_path, 'r') as f:
data = YamlLoader().load(f)
self.version = data['version']
self.hash = data['hash']
except:
pass
def _parse_path(self):
if self.is_shadow_repository():
path = os.readlink(self.repository_dir)
else:
path = self.repository_dir
path = path.strip('/')
path, _hash = os.path.split(path)
path, version = os.path.split(path)
if not self.version:
self.version = version
def _dump(self):
data = {'version': self.version, 'hash': self.hash}
try:
with open(self.data_file_path, 'w') as f:
YamlLoader().dump(data, f)
return True
except:
self.stdio and getattr(self.stdio, 'exception', print)('dump %s to %s failed' % (data, self.data_file_path))
return False
def load_pkg(self, pkg, plugin):
if self.is_shadow_repository():
self.stdio and getattr(self.stdio, 'print', print)('%s is a shadow repository' % self)
return False
hash_path = os.path.join(self.repository_dir, '.hash')
if self.hash == pkg.md5 and self.file_check(plugin):
return True
self.clear()
try:
file_map = plugin.file_map
with pkg.open() as rpm:
files = {}
links = {}
dirnames = rpm.headers.get("dirnames")
basenames = rpm.headers.get("basenames")
dirindexes = rpm.headers.get("dirindexes")
filelinktos = rpm.headers.get("filelinktos")
filemd5s = rpm.headers.get("filemd5s")
filemodes = rpm.headers.get("filemodes")
for i in range(len(basenames)):
path = os.path.join(dirnames[dirindexes[i]], basenames[i])
if isinstance(path, bytes):
path = path.decode()
if not path.startswith('./'):
path = '.%s' % path
files[path] = i
for src_path in file_map:
if src_path not in files:
raise Exception('%s not found in package' % src_path)
idx = files[src_path]
file_item = file_map[src_path]
target_path = os.path.join(self.repository_dir, file_item.target_path)
if filemd5s[idx]:
fd = rpm.extractfile(src_path)
self.stdio and getattr(self.stdio, 'verbose', print)('extract %s to %s' % (src_path, target_path))
with FileUtil.open(target_path, 'wb', self.stdio) as f:
FileUtil.copy_fileobj(fd, f)
mode = filemodes[idx] & 0x1ff
if mode != 0o744:
os.chmod(target_path, mode)
elif filelinktos[idx]:
links[target_path] = filelinktos[idx]
else:
raise Exception('%s is directory' % src_path)
for link in links:
self.stdio and getattr(self.stdio, 'verbose', print)('link %s to %s' % (links[link], link))
os.symlink(links[link], link)
self.version = pkg.version
self.hash = pkg.md5
if self._dump():
return True
else:
self.clear()
except:
self.stdio and getattr(self.stdio, 'exception', print)('failed to extract file from %s' % pkg.path)
self.clear()
return False
def clear(self):
return DirectoryUtil.rm(self.repository_dir, self.stdio) and DirectoryUtil.mkdir(self.repository_dir, stdio=self.stdio)
class ComponentRepository(object):
def __init__(self, name, repository_dir, stdio=None):
self.repository_dir = repository_dir
self.stdio = stdio
self.name = name
DirectoryUtil.mkdir(self.repository_dir, stdio=stdio)
def get_instance_repositories(self, version):
repositories = {}
for tag in os.listdir(self.repository_dir):
path = os.path.join(self.repository_dir, tag)
if os.path.islink(path):
continue
repository = Repository(self.name, path, self.stdio)
if repository.hash:
repositories[repository.hash] = repository
return repositories
def get_shadow_repositories(self, version, instance_repositories={}):
repositories = {}
for tag in os.listdir(self.repository_dir):
path = os.path.join(self.repository_dir, tag)
if not os.path.islink(path):
continue
_, md5 = os.path.split(os.readlink(path))
if md5 in instance_repositories:
repositories[tag] = instance_repositories[md5]
else:
repository = Repository(self.name, path, self.stdio)
if repository.hash:
repositories[repository.hash] = repository
return repositories
def get_repository_by_version(self, version, tag=None):
path_partten = os.path.join(self.repository_dir, version, tag if tag else '*')
for path in glob(path_partten):
repository = Repository(self.name, path, self.stdio)
if repository.hash:
return repository
return None
def get_repository_by_tag(self, tag, version=None):
path_partten = os.path.join(self.repository_dir, version if version else '*', tag)
for path in glob(path_partten):
repository = Repository(self.name, path, self.stdio)
if repository.hash:
return repository
return None
def get_repository(self, version=None, tag=None):
if version:
return self.get_repository_by_version(version, tag)
version = []
for rep_version in os.listdir(self.repository_dir):
rep_version = rep_version.split('.')
if rep_version > version:
version = rep_version
if version:
return self.get_repository_by_version('.'.join(version), tag)
return None
class RepositoryManager(Manager):
RELATIVE_PATH = 'repository'
# The directory structure for repository is ./repository/{component_name}/{version}/{tag or hash}
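# A hypothetical layout as an example (component, version and hash are illustrative):
#   ./repository/oceanbase-ce/3.1.0/<md5 hash>    -> an instance repository
#   ./repository/oceanbase-ce/3.1.0/oceanbase-ce  -> a tag, normally a symlink to an instance repository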
def __init__(self, home_path, stdio=None):
super(RepositoryManager, self).__init__(home_path, stdio=stdio)
self.repositories = {}
self.component_repositoies = {}
def get_repositoryies(self, name):
repositories = {}
path_partten = os.path.join(self.path, name, '*')
for path in glob(path_partten):
_, version = os.path.split(path)
repositories[version] = Repository(name, path, self.stdio)
return repositories
def get_repository_by_version(self, name, version, tag=None, instance=True):
if not tag:
tag = name
path = os.path.join(self.path, name, version, tag)
if path not in self.repositories:
if name not in self.component_repositoies:
self.component_repositoies[name] = ComponentRepository(name, os.path.join(self.path, name), self.stdio)
repository = self.component_repositoies[name].get_repository(version, tag)
if repository:
self.repositories[repository.repository_dir] = repository
self.repositories[path] = repository
else:
repository = self.repositories[path]
return self.get_instance_repository_from_shadow(repository) if instance else repository
def get_repository(self, name, version=None, tag=None, instance=True):
if version:
return self.get_repository_by_version(name, version, tag)
if not tag:
tag = name
if name not in self.component_repositoies:
path = os.path.join(self.path, name)
self.component_repositoies[name] = ComponentRepository(name, path, self.stdio)
repository = self.component_repositoies[name].get_repository(version, tag)
if repository:
self.repositories[repository.repository_dir] = repository
return self.get_instance_repository_from_shadow(repository) if repository and instance else repository
def create_instance_repository(self, name, version, _hash):
path = os.path.join(self.path, name, version, _hash)
if path not in self.repositories:
self._mkdir(path)
repository = Repository(name, path, self.stdio)
self.repositories[path] = repository
return self.repositories[path]
def get_repository_allow_shadow(self, name, version, tag=None):
path = os.path.join(self.path, name, version, tag if tag else name)
if os.path.exists(path):
if path not in self.repositories:
self.repositories[path] = Repository(name, path, self.stdio)
return self.repositories[path]
repository = Repository(name, path, self.stdio)
repository.version = version
return repository
def create_tag_for_repository(self, repository, tag, force=False):
if repository.is_shadow_repository():
return False
path = os.path.join(self.path, repository.name, repository.version, tag)
if os.path.exists(path):
if not os.path.islink(path):
return False
src_path = os.readlink(path)
if os.path.normcase(src_path) == os.path.normcase(repository.repository_dir):
return True
if not force:
return False
DirectoryUtil.rm(path)
try:
os.symlink(repository.repository_dir, path)
return True
except:
pass
return False
def get_instance_repository_from_shadow(self, repository):
if not isinstance(repository, Repository) or not repository.is_shadow_repository():
return repository
try:
path = os.readlink(repository.repository_dir)
if path not in self.repositories:
self.repositories[path] = Repository(repository.name, path, self.stdio)
return self.repositories[path]
except:
pass
return None
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import sys
import rpmfile
# The lzma modules differ between py2 and py3.
# python3 ships lzma in its standard library.
# python2 has no pure lzma module of its own and relies on a dynamic library; pip provides pyliblzma, which is also a dynamic library, the same one bundled with the CentOS python.
# The python2 lzma API differs from the python3 one.
# The python2 third-party package backports.lzma has the same API as python3.
# However, rpmfile only tries `import lzma`, which does not work under python2.
# So rpmfile is imported first, and the correct lzma dependency is then injected into it.
if sys.version_info.major == 2:
from backports import lzma
setattr(sys.modules['rpmfile'], 'lzma', getattr(sys.modules[__name__], 'lzma'))
class Package(object):
def __init__(self, path):
self.path = path
with self.open() as rpm:
self.name = rpm.headers.get('name').decode()
self.version = rpm.headers.get('version').decode()
self.release = rpm.headers.get('release').decode()
self.arch = rpm.headers.get('arch').decode()
self.md5 = rpm.headers.get('md5').decode()
def __str__(self):
return 'name: %s\nversion: %s\nrelease:%s\narch: %s\nmd5: %s' % (self.name, self.version, self.release, self.arch, self.md5)
@property
def file_name(self):
return '%s-%s-%s.%s.rpm' % (self.name, self.version, self.release, self.arch)
def open(self):
return rpmfile.open(self.path)
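# A minimal usage sketch for Package (the rpm path is hypothetical):
#
# pkg = Package('./oceanbase-ce-3.1.0.el7.x86_64.rpm')
# print(pkg)            # name/version/release/arch/md5 read from the rpm headers
# print(pkg.file_name)  # canonical rpm file name rebuilt from those headers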
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import sys
import traceback
from enum import Enum
from halo import Halo, cursor
from colorama import Fore
from prettytable import PrettyTable
from progressbar import Bar, ETA, FileTransferSpeed, Percentage, ProgressBar
if sys.version_info.major == 3:
raw_input = input
input = lambda msg: int(raw_input(msg))
class BufferIO(object):
def __init__(self):
self._buffer = []
def write(self, s):
self._buffer.append(s)
def read(self):
s = ''.join(self._buffer)
self._buffer = []
return s
class FormtatText(object):
@staticmethod
def format(text, color):
return color + text + Fore.RESET
@staticmethod
def info(text):
return FormtatText.format(text, Fore.BLUE)
@staticmethod
def success(text):
return FormtatText.format(text, Fore.GREEN)
@staticmethod
def warning(text):
return FormtatText.format(text, Fore.YELLOW)
@staticmethod
def error(text):
return FormtatText.format(text, Fore.RED)
class LogSymbols(Enum):
INFO = FormtatText.info('!')
SUCCESS = FormtatText.success('ok')
WARNING = FormtatText.warning('!!')
ERROR = FormtatText.error('x')
class IOTable(PrettyTable):
@property
def align(self):
"""Controls alignment of fields
Arguments:
align - alignment, one of "l", "c", or "r" """
return self._align
@align.setter
def align(self, val):
if not self._field_names:
self._align = {}
elif isinstance(val, dict):
val_map = val
for field in self._field_names:
if field in val_map:
val = val_map[field]
self._validate_align(val)
else:
val = 'l'
self._align[field] = val
else:
if val:
self._validate_align(val)
else:
val = 'l'
for field in self._field_names:
self._align[field] = val
class IOHalo(Halo):
def __init__(self, text='', color='cyan', text_color=None, spinner='line', animation=None, placement='right', interval=-1, enabled=True, stream=sys.stdout):
super(IOHalo, self).__init__(text=text, color=color, text_color=text_color, spinner=spinner, animation=animation, placement=placement, interval=interval, enabled=enabled, stream=stream)
def start(self, text=None):
if getattr(self._stream, 'isatty', lambda : False)():
return super(IOHalo, self).start(text=text)
else:
text and self._stream.write(text)
def stop_and_persist(self, symbol=' ', text=None):
if getattr(self._stream, 'isatty', lambda : False)():
return super(IOHalo, self).stop_and_persist(symbol=symbol, text=text)
else:
self._stream.write(' %s\n' % symbol)
def succeed(self, text=None):
return self.stop_and_persist(symbol=LogSymbols.SUCCESS.value, text=text)
def fail(self, text=None):
return self.stop_and_persist(symbol=LogSymbols.ERROR.value, text=text)
def warn(self, text=None):
return self.stop_and_persist(symbol=LogSymbols.WARNING.value, text=text)
def info(self, text=None):
return self.stop_and_persist(symbol=LogSymbols.INFO.value, text=text)
class IOProgressBar(ProgressBar):
def __init__(self, maxval=None, text='', term_width=None, poll=1, left_justify=True, stream=None):
widgets=['%s: ' % text, Percentage(), ' ',
Bar(marker='#', left='[', right=']'),
' ', ETA(), ' ', FileTransferSpeed()]
super(IOProgressBar, self).__init__(maxval=maxval, widgets=widgets, term_width=term_width, poll=poll, left_justify=left_justify, fd=stream)
def start(self):
self._hide_cursor()
return super(IOProgressBar, self).start()
def update(self, value=None):
return super(IOProgressBar, self).update(value=value)
def finish(self):
self._show_cursor()
return super(IOProgressBar, self).finish()
def _need_update(self):
return (self.currval == self.maxval or self.currval == 0 or getattr(self.fd, 'isatty', lambda : False)()) \
and super(IOProgressBar, self)._need_update()
def _check_stream(self):
if self.fd.closed:
return False
try:
check_stream_writable = self.fd.writable
except AttributeError:
pass
else:
return check_stream_writable()
return True
def _hide_cursor(self):
"""Disable the user's blinking cursor
"""
if self._check_stream() and self.fd.isatty():
cursor.hide(stream=self.fd)
def _show_cursor(self):
"""Re-enable the user's blinking cursor
"""
if self._check_stream() and self.fd.isatty():
cursor.show(stream=self.fd)
class MsgLevel(object):
CRITICAL = 50
FATAL = CRITICAL
ERROR = 40
WARNING = 30
WARN = WARNING
INFO = 20
DEBUG = 10
VERBOSE = DEBUG
NOTSET = 0
class IO(object):
WIDTH = 64
VERBOSE_LEVEL = 0
WARNING_PREV = FormtatText.warning('[WARN]')
ERROR_PREV = FormtatText.error('[ERROR]')
def __init__(self, level, msg_lv=MsgLevel.DEBUG, trace_logger=None, track_limit=0, root_io=None, stream=sys.stdout):
self.level = level
self.msg_lv = msg_lv
self.trace_logger = trace_logger
self._root_io = root_io
self.track_limit = track_limit
self._verbose_prefix = '-' * self.level
self.sub_ios = {}
self.sync_obj = None
self._out_obj = None if self._root_io else stream
self._cur_out_obj = self._out_obj
self._before_critical = None
def before_close(self):
if self._before_critical:
try:
self._before_critical(self)
except:
pass
def __del__(self):
self.before_close()
def get_cur_out_obj(self):
if self._root_io:
return self._root_io.get_cur_out_obj()
return self._cur_out_obj
def _start_buffer_io(self):
if self._root_io:
return False
if self._cur_out_obj != self._out_obj:
return False
self._cur_out_obj = BufferIO()
return True
def _stop_buffer_io(self):
if self._root_io:
return False
if self._cur_out_obj == self._out_obj:
return False
text = self._cur_out_obj.read()
self._cur_out_obj = self._out_obj
if text:
self.print(text)
return True
@staticmethod
def set_verbose_level(level):
IO.VERBOSE_LEVEL = level
def _start_sync_obj(self, sync_clz, before_critical, *arg, **kwargs):
if self._root_io:
return self._root_io._start_sync_obj(sync_clz, before_critical, *arg, **kwargs)
if self.sync_obj:
return None
if not self._start_buffer_io():
return None
kwargs['stream'] = self._out_obj
try:
self.sync_obj = sync_clz(*arg, **kwargs)
self._before_critical = before_critical
except Exception as e:
self._stop_buffer_io()
raise e
return self.sync_obj
def _clear_sync_ctx(self):
self._stop_buffer_io()
self.sync_obj = None
self._before_critical = None
def _stop_sync_obj(self, sync_clz, stop_type, *arg, **kwargs):
if self._root_io:
ret = self._root_io._stop_sync_obj(sync_clz, stop_type, *arg, **kwargs)
self._clear_sync_ctx()
else:
if not isinstance(self.sync_obj, sync_clz):
return False
try:
ret = getattr(self.sync_obj, stop_type)(*arg, **kwargs)
except Exception as e:
raise e
finally:
self._clear_sync_ctx()
return ret
def start_loading(self, text, *arg, **kwargs):
if self.sync_obj:
return False
self.sync_obj = self._start_sync_obj(IOHalo, lambda x: x.stop_loading('fail'), *arg, **kwargs)
if self.sync_obj:
self._log(MsgLevel.INFO, text)
return self.sync_obj.start(text)
def stop_loading(self, stop_type, *arg, **kwargs):
if not isinstance(self.sync_obj, IOHalo):
return False
if getattr(self.sync_obj, stop_type, False):
return self._stop_sync_obj(IOHalo, stop_type, *arg, **kwargs)
else:
return self._stop_sync_obj(IOHalo, 'stop')
def start_progressbar(self, text, maxval):
if self.sync_obj:
return False
self.sync_obj = self._start_sync_obj(IOProgressBar, lambda x: x.finish_progressbar(), text=text, maxval=maxval)
if self.sync_obj:
self._log(MsgLevel.INFO, text)
return self.sync_obj.start()
def update_progressbar(self, value):
if not isinstance(self.sync_obj, IOProgressBar):
return False
return self.sync_obj.update(value)
def finish_progressbar(self):
if not isinstance(self.sync_obj, IOProgressBar):
return False
return self._stop_sync_obj(IOProgressBar, 'finish')
def sub_io(self, pid=None, msg_lv=None):
if not pid:
pid = os.getpid()
if msg_lv is None:
msg_lv = self.msg_lv
key = "%s-%s" % (pid, msg_lv)
if key not in self.sub_ios:
self.sub_ios[key] = IO(
self.level + 1,
msg_lv=msg_lv,
trace_logger=self.trace_logger,
track_limit=self.track_limit,
root_io=self._root_io if self._root_io else self
)
return self.sub_ios[key]
def print_list(self, ary, field_names=None, exp=lambda x: x if isinstance(x, list) else [x], show_index=False, start=0, **kwargs):
if not ary:
return
show_index = field_names is not None and show_index
if show_index:
field_names.insert(0, 'idx')
table = IOTable(field_names, **kwargs)
for row in ary:
row = exp(row)
if show_index:
row.insert(0, start)
start += 1
table.add_row(row)
self.print(table)
def confirm(self, msg):
while True:
try:
ans = raw_input('%s [y/n]: ' % msg)
if ans == 'y':
return True
if ans == 'n':
return False
except:
pass
def _format(self, msg, *args):
if args:
msg = msg % args
return msg
def _print(self, msg_lv, msg, *args, **kwargs):
if msg_lv < self.msg_lv:
return
kwargs['file'] = self.get_cur_out_obj()
kwargs['file'] and print(self._format(msg, *args), **kwargs)
del kwargs['file']
self._log(msg_lv, msg, *args, **kwargs)
def _log(self, levelno, msg, *args, **kwargs):
self.trace_logger and self.trace_logger.log(levelno, msg, *args, **kwargs)
def print(self, msg, *args, **kwargs):
self._print(MsgLevel.INFO, msg, *args, **kwargs)
def warn(self, msg, *args, **kwargs):
self._print(MsgLevel.WARN, '%s %s' % (self.WARNING_PREV, msg), *args, **kwargs)
def error(self, msg, *args, **kwargs):
self._print(MsgLevel.ERROR, '%s %s' % (self.ERROR_PREV, msg), *args, **kwargs)
def critical(self, msg, *args, **kwargs):
if self._root_io:
return self._root_io.critical(msg, *args, **kwargs)
self._print(MsgLevel.CRITICAL, '%s %s' % (self.ERROR_PREV, msg), *args, **kwargs)
self.exit(kwargs['code'] if 'code' in kwargs else 255)
def exit(self, code):
self.before_close()
sys.exit(code)
def verbose(self, msg, *args, **kwargs):
if self.level > self.VERBOSE_LEVEL:
self._log(MsgLevel.VERBOSE, '%s %s' % (self._verbose_prefix, msg), *args, **kwargs)
return
self._print(MsgLevel.VERBOSE, '%s %s' % (self._verbose_prefix, msg), *args, **kwargs)
if sys.version_info.major == 2:
def exception(self, msg, *args, **kwargs):
import linecache
exception_msg = []
ei = sys.exc_info()
exception_msg.append('Traceback (most recent call last):')
stack = traceback.extract_stack()[self.track_limit:-2]
tb = ei[2]
while tb is not None:
f = tb.tb_frame
lineno = tb.tb_lineno
co = f.f_code
filename = co.co_filename
name = co.co_name
linecache.checkcache(filename)
line = linecache.getline(filename, lineno, f.f_globals)
tb = tb.tb_next
stack.append((filename, lineno, name, line))
for line in stack:
exception_msg.append(' File "%s", line %d, in %s' % line[:3])
if line[3]: exception_msg.append(' ' + line[3].strip())
lines = []
for line in traceback.format_exception_only(ei[0], ei[1]):
lines.append(line)
if lines:
exception_msg.append(''.join(lines))
if self.level <= self.VERBOSE_LEVEL:
msg = '%s\n%s' % (msg, '\n'.join(exception_msg))
self.error(msg)
else:
msg and self.error(msg)
self._log(MsgLevel.VERBOSE, '\n'.join(exception_msg))
else:
def exception(self, msg, *args, **kwargs):
ei = sys.exc_info()
traceback_e = traceback.TracebackException(type(ei[1]), ei[1], ei[2], limit=None)
pre_stach = traceback.extract_stack()[self.track_limit:-2]
pre_stach.reverse()
for summary in pre_stach:
traceback_e.stack.insert(0, summary)
lines = []
for line in traceback_e.format(chain=True):
lines.append(line)
if self.level <= self.VERBOSE_LEVEL:
msg = '%s\n%s' % (msg, ''.join(lines))
self.error(msg)
else:
msg and self.error(msg)
self._log(MsgLevel.VERBOSE, ''.join(lines))
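# A minimal usage sketch for the IO helper (the message texts are illustrative):
#
# stdio = IO(1)
# stdio.start_loading('Remote install')
# stdio.stop_loading('succeed')
# stdio.start_progressbar('Download package', 100)
# stdio.update_progressbar(50)
# stdio.finish_progressbar()
# stdio.print_list([['oceanbase-ce', '3.1.0']], field_names=['name', 'version'])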
#!/bin/bash
if [ `id -u` != 0 ] ; then
echo "Please use root to run"
fi
obd_dir=`dirname $0`
python_bin='/usr/bin/python'
python_path=`whereis python`
for bin in ${python_path[@]}; do
if [ -x $bin ]; then
python_bin=$bin
break 1
fi
done
read -p "Enter python path [default $python_bin]:"
if [ "x$REPLY" != "x" ]; then
python_bin=$REPLY
fi
rm -fr /usr/obd && mkdir -p /usr/obd
rm -fr $obd_dir/mirror/remote && mkdir -p $obd_dir/mirror/remote && cd $obd_dir/mirror/remote
wget http://yum.tbsite.net/mirrors/oceanbase/OceanBase.repo
cp -r -d $obd_dir/* /usr/obd
cd /usr/obd/plugins && ln -sf oceanbase oceanbase-ce
cp -f /usr/obd/profile/obd.sh /etc/profile.d/obd.sh
rm -fr /usr/bin/obd
echo -e "# /bin/bash\n$python_bin /usr/obd/_cmd.py \$*" > /usr/bin/obd
chmod +x /usr/bin/obd
echo -e 'Installation of obd finished successfully\nPlease source /etc/profile.d/obd.sh to enable it'
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import re
import os
import sys
import time
import fcntl
from optparse import Values
import tempfile
from subprocess import call as subprocess_call
from prettytable import PrettyTable
from halo import Halo
from ssh import SshClient, SshConfig
from tool import ConfigUtil, FileUtil, DirectoryUtil, YamlLoader
from _stdio import MsgLevel
from _mirror import MirrorRepositoryManager
from _plugin import PluginManager, PluginType
from _repository import RepositoryManager, LocalPackage
from _deploy import DeployManager, DeployStatus, DeployConfig, DeployConfigStatus
class ObdHome(object):
HOME_LOCK_RELATIVE_PATH = 'obd.conf'
def __init__(self, home_path, stdio=None, lock=True):
self.home_path = home_path
self._lock = None
self._home_conf = None
self._mirror_manager = None
self._repository_manager = None
self._deploy_manager = None
self._plugin_manager = None
self.stdio = None
self._stdio_func = None
lock and self.lock()
self.set_stdio(stdio)
def lock(self):
if self._lock is None:
self._lock = FileUtil.open(os.path.join(self.home_path, self.HOME_LOCK_RELATIVE_PATH), 'w')
fcntl.flock(self._lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
def unlock(self):
try:
if self._lock is not None:
fcntl.flock(self._lock, fcntl.LOCK_UN)
except:
pass
def __del__(self):
self.unlock()
@property
def mirror_manager(self):
if not self._mirror_manager:
self._mirror_manager = MirrorRepositoryManager(self.home_path, self.stdio)
return self._mirror_manager
@property
def repository_manager(self):
if not self._repository_manager:
self._repository_manager = RepositoryManager(self.home_path, self.stdio)
return self._repository_manager
@property
def plugin_manager(self):
if not self._plugin_manager:
self._plugin_manager = PluginManager(self.home_path, self.stdio)
return self._plugin_manager
@property
def deploy_manager(self):
if not self._deploy_manager:
self._deploy_manager = DeployManager(self.home_path, self.stdio)
return self._deploy_manager
def set_stdio(self, stdio):
def _print(msg, *arg, **kwarg):
sep = kwarg['sep'] if 'sep' in kwarg else None
end = kwarg['end'] if 'end' in kwarg else None
return print(msg, sep='' if sep is None else sep, end='\n' if end is None else end)
self.stdio = stdio
self._stdio_func = {}
if not self.stdio:
return
for func in ['start_loading', 'stop_loading', 'print', 'confirm', 'verbose', 'warn', 'exception', 'error', 'critical', 'print_list']:
self._stdio_func[func] = getattr(self.stdio, func, _print)
def _call_stdio(self, func, msg, *arg, **kwarg):
if func not in self._stdio_func:
return None
return self._stdio_func[func](msg, *arg, **kwarg)
def add_mirror(self, src, opts):
if re.match('^https?://', src):
return self.mirror_manager.add_remote_mirror(src)
else:
return self.mirror_manager.add_local_mirror(src, getattr(opts, 'force', False))
def deploy_param_check(self, repositories, deploy_config):
# parameter check
errors = []
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
for server in cluster_config.servers:
self._call_stdio('verbose', '%s %s param check' % (server, repository))
need_items = cluster_config.get_unconfigured_require_item(server)
if need_items:
errors.append('%s %s need config: %s' % (server, repository.name, ','.join(need_items)))
return errors
def get_clients(self, deploy_config, repositories):
ssh_clients = {}
self._call_stdio('start_loading', 'Open ssh connection')
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
# ssh check
self.ssh_clients_connect(ssh_clients, cluster_config.servers, deploy_config.user)
self._call_stdio('stop_loading', 'succeed')
return ssh_clients
def ssh_clients_connect(self, ssh_clients, servers, user_config):
for server in servers:
if server.ip not in ssh_clients:
ssh_clients[server] = SshClient(
SshConfig(
server.ip,
user_config.username,
user_config.password,
user_config.key_file,
user_config.port,
user_config.timeout
),
self.stdio
)
ssh_clients[server].connect()
def search_plugin(self, repository, plugin_type, no_found_exit=True):
self._call_stdio('verbose', 'Search %s plugin for %s' % (plugin_type.name.lower(), repository))
plugin = self.plugin_manager.get_best_plugin(plugin_type, repository.name, repository.version)
if plugin:
self._call_stdio('verbose', 'Found %s for %s-%s' % (plugin, repository.name, repository.version))
else:
if no_found_exit:
self._call_stdio('critical', 'No such %s plugin for %s-%s' % (plugin_type.name.lower(), repository.name, repository.version))
else:
self._call_stdio('warn', 'No such %s plugin for %s-%s' % (plugin_type.name.lower(), repository.name, repository.version))
return plugin
def search_plugins(self, repositories, plugin_type, no_found_exit=True):
plugins = {}
self._call_stdio('verbose', 'Searching %s plugin for components ...', plugin_type.name.lower())
for repository in repositories:
plugin = self.search_plugin(repository, plugin_type, no_found_exit)
if plugin:
plugins[repository] = plugin
elif no_found_exit:
return None
return plugins
def search_py_script_plugin(self, repositories, script_name, no_found_exit=True):
plugins = {}
self._call_stdio('verbose', 'Searching %s plugin for components ...', script_name)
for repository in repositories:
self._call_stdio('verbose', 'Searching %s plugin for %s' % (script_name, repository))
plugin = self.plugin_manager.get_best_py_script_plugin(script_name, repository.name, repository.version)
if plugin:
plugins[repository] = plugin
self._call_stdio('verbose', 'Found %s for %s-%s' % (plugin, repository.name, repository.version))
else:
if no_found_exit:
self._call_stdio('critical', 'No such %s plugin for %s-%s' % (script_name, repository.name, repository.version))
break
else:
self._call_stdio('warn', 'No such %s plugin for %s-%s' % (script_name, repository.name, repository.version))
return plugins
def search_components_from_mirrors(self, deploy_config, fuzzy_match=False, only_info=True):
pkgs = []
errors = []
repositories = []
self._call_stdio('verbose', 'Search package for components...')
for component in deploy_config.components:
config = deploy_config.components[component]
# First, check if the component exists in the repository. If exists, check if the version is available. If so, use the repository directly.
self._call_stdio('verbose', 'Get %s repository' % component)
repository = self.repository_manager.get_repository(component, config.version, config.package_hash if config.package_hash else config.tag)
self._call_stdio('verbose', 'Check %s version for the repository' % repository)
if repository and repository.hash:
repositories.append(repository)
self._call_stdio('verbose', 'Use repository %s' % repository)
self._call_stdio('print', '%s-%s already installed' % (repository.name, repository.version))
continue
self._call_stdio('verbose', 'Search %s package from mirror' % component)
pkg = self.mirror_manager.get_best_pkg(name=component, version=config.version, md5=config.package_hash, fuzzy_match=fuzzy_match, only_info=only_info)
if pkg:
self._call_stdio('verbose', 'Package %s-%s is available.' % (pkg.name, pkg.version))
if config.version and pkg.version != config.version:
self._call_stdio('warn', 'No such package %s-%s. Use similar package %s-%s.' % (component, config.version, pkg.name, pkg.version))
else:
self._call_stdio('print', 'Package %s-%s is available' % (pkg.name, pkg.version))
repository = self.repository_manager.get_repository(pkg.name, pkg.md5)
if repository:
repositories.append(repository)
else:
pkgs.append(pkg)
else:
pkg_name = [component]
if config.version:
pkg_name.append(config.version)
if config.package_hash:
pkg_name.append(config.package_hash)
elif config.tag:
pkg_name.append(config.tag)
errors.append('No such package %s.' % ('-'.join(pkg_name)))
return pkgs, repositories, errors
def load_local_repositories(self, deploy_config, allow_shadow=True):
return self.get_local_repositories(deploy_config.components, allow_shadow)
def get_local_repositories(self, components, allow_shadow=True):
repositories = []
if allow_shadow:
get_repository = self.repository_manager.get_repository_allow_shadow
else:
get_repository = self.repository_manager.get_repository
for component_name in components:
cluster_config = components[component_name]
self._call_stdio('verbose', 'Get local repository %s-%s-%s' % (component_name, cluster_config.version, cluster_config.tag))
repository = get_repository(component_name, cluster_config.version, cluster_config.package_hash if cluster_config.package_hash else cluster_config.tag)
if repository:
repositories.append(repository)
else:
self._call_stdio('critical', 'Local repository %s-%s-%s is empty.' % (component_name, cluster_config.version, cluster_config.tag))
return repositories
def search_param_plugin_and_apply(self, repositories, deploy_config):
self._call_stdio('verbose', 'Searching param plugin for components ...')
for repository in repositories:
plugin = self.search_plugin(repository, PluginType.PARAM, False)
if plugin:
self._call_stdio('verbose', 'Applying %s for %s' % (plugin, repository))
cluster_config = deploy_config.components[repository.name]
cluster_config.update_temp_conf(plugin.params)
def edit_deploy_config(self, name):
def confirm(msg):
if self.stdio:
self._call_stdio('print', msg)
if self._call_stdio('confirm', 'edit?'):
return True
return False
def is_deployed():
return deploy and deploy.deploy_info.status not in [DeployStatus.STATUS_CONFIGURED, DeployStatus.STATUS_DESTROYED]
def is_server_list_change(deploy_config):
for component_name in deploy_config.components:
if deploy_config.components[component_name].servers != deploy.deploy_config.components[component_name].servers:
return True
return False
self._call_stdio('verbose', 'Get Deploy by name')
deploy = self.deploy_manager.get_deploy_config(name)
initial_config = ''
if deploy:
try:
if deploy.deploy_info.config_status == DeployConfigStatus.UNCHNAGE:
path = deploy.deploy_config.yaml_path
else:
path = deploy.get_temp_deploy_yaml_path(deploy.config_dir)
self._call_stdio('verbose', 'Load %s' % path)
with open(path, 'r') as f:
initial_config = f.read()
except:
self._call_stdio('exception', '')
msg = 'Save deploy "%s" configuration' % name
else:
if not self.stdio:
return False
if not self._call_stdio('confirm', 'No such deploy: %s. Create?' % name):
return False
msg = 'Create deploy "%s" configuration' % name
EDITOR = os.environ.get('EDITOR','vi')
self._call_stdio('verbose', 'Get environment variable EDITOR=%s' % EDITOR)
self._call_stdio('verbose', 'Create tmp yaml file')
tf = tempfile.NamedTemporaryFile(suffix=".yaml")
tf.write(initial_config.encode())
tf.flush()
while True:
tf.seek(0)
self._call_stdio('verbose', '%s %s' % (EDITOR, tf.name))
subprocess_call([EDITOR, tf.name])
self._call_stdio('verbose', 'Load %s' % tf.name)
deploy_config = DeployConfig(tf.name, YamlLoader(self.stdio))
self._call_stdio('verbose', 'Configure component change check')
if not deploy_config.components:
if self._call_stdio('confirm', 'Empty configuration'):
continue
return False
self._call_stdio('verbose', 'Information check for the configuration component.')
if not deploy:
config_status = DeployConfigStatus.NEED_REDEPLOY
elif is_deployed():
if deploy_config.components.keys() != deploy.deploy_config.components.keys():
if confirm('Modifying the component list of a deployed cluster is not permitted.'):
continue
return False
if is_server_list_change(deploy_config):
if confirm('Modifying the server list of a deployed cluster is not permitted.'):
continue
return False
success = True
for component_name in deploy_config.components:
old_cluster_config = deploy.deploy_config.components[component_name]
new_cluster_config = deploy_config.components[component_name]
if new_cluster_config.version and new_cluster_config.version != old_cluster_config.version:
success = False
break
if new_cluster_config.package_hash and new_cluster_config.package_hash != old_cluster_config.package_hash:
success = False
break
if not success:
if confirm('Modifying the version and hash of the component is not permitted.'):
continue
return False
pkgs, repositories, errors = self.search_components_from_mirrors(deploy_config)
# Loading the parameter plugins that are available to the application
self._call_stdio('start_loading', 'Search param plugin and load')
for repository in repositories:
self._call_stdio('verbose', 'Search param plugin for %s' % repository)
plugin = self.plugin_manager.get_best_plugin(PluginType.PARAM, repository.name, repository.version)
if plugin:
self._call_stdio('verbose', 'Load param plugin for %s' % repository)
deploy_config.components[repository.name].update_temp_conf(plugin.params)
if deploy and repository.name in deploy.deploy_config.components:
deploy.deploy_config.components[repository.name].update_temp_conf(plugin.params)
for pkg in pkgs:
self._call_stdio('verbose', 'Search param plugin for %s' % pkg)
plugin = self.plugin_manager.get_best_plugin(PluginType.PARAM, pkg.name, pkg.version)
if plugin:
self._call_stdio('verbose', 'load param plugin for %s' % pkg)
deploy_config.components[pkg.name].update_temp_conf(plugin.params)
if deploy and pkg.name in deploy.deploy_config.components:
deploy.deploy_config.components[pkg.name].update_temp_conf(plugin.params)
self._call_stdio('stop_loading', 'succeed')
# Parameter check
self._call_stdio('start_loading', 'Parameter check')
errors = self.deploy_param_check(repositories, deploy_config) + self.deploy_param_check(pkgs, deploy_config)
self._call_stdio('stop_loading', 'fail' if errors else 'succeed')
if errors:
if confirm('\n'.join(errors)):
continue
return False
self._call_stdio('verbose', 'configure change check')
if initial_config and initial_config == tf.read().decode(errors='replace'):
config_status = deploy.deploy_info.config_status if deploy else DeployConfigStatus.UNCHNAGE
self._call_stdio('print', 'Deploy "%s" config %s' % (name, config_status.value))
return True
config_status = DeployConfigStatus.UNCHNAGE
if is_deployed():
for component_name in deploy_config.components:
if config_status == DeployConfigStatus.NEED_REDEPLOY:
break
old_cluster_config = deploy.deploy_config.components[component_name]
new_cluster_config = deploy_config.components[component_name]
if old_cluster_config == new_cluster_config:
continue
if config_status == DeployConfigStatus.UNCHNAGE:
config_status = DeployConfigStatus.NEED_RELOAD
for server in old_cluster_config.servers:
if old_cluster_config.get_need_redeploy_items(server) != new_cluster_config.get_need_redeploy_items(server):
config_status = DeployConfigStatus.NEED_REDEPLOY
break
if old_cluster_config.get_need_restart_items(server) != new_cluster_config.get_need_restart_items(server):
config_status = DeployConfigStatus.NEED_RESTART
if deploy.deploy_info.status == DeployStatus.STATUS_DEPLOYED and config_status != DeployConfigStatus.NEED_REDEPLOY:
config_status = DeployConfigStatus.UNCHNAGE
break
self._call_stdio('verbose', 'Set deploy configuration status to %s' % config_status)
self._call_stdio('verbose', 'Save new configuration yaml file')
if config_status == DeployConfigStatus.UNCHNAGE:
ret = self.deploy_manager.create_deploy_config(name, tf.name).update_deploy_config_status(config_status)
else:
target_src_path = deploy.get_temp_deploy_yaml_path(deploy.config_dir)
old_config_status = deploy.deploy_info.config_status
try:
if deploy.update_deploy_config_status(config_status):
FileUtil.copy(tf.name, target_src_path, self.stdio)
ret = True
if deploy:
if deploy.deploy_info.status == DeployStatus.STATUS_RUNNING or (
config_status == DeployConfigStatus.NEED_REDEPLOY and is_deployed()
):
msg += '\ndeploy "%s"' % config_status.value
except Exception as e:
deploy.update_deploy_config_status(old_config_status)
self._call_stdio('exception', 'Copy %s to %s failed, error: \n%s' % (tf.name, target_src_path, e))
msg += ' failed'
ret = False
self._call_stdio('print', msg)
tf.close()
return ret
def list_deploy(self):
self._call_stdio('verbose', 'Get deploy list')
deploys = self.deploy_manager.get_deploy_configs()
if deploys:
self._call_stdio('print_list', deploys,
['Name', 'Configuration Path', 'Status (Cached)'],
lambda x: [x.name, x.config_dir, x.deploy_info.status.value],
title='Cluster List',
)
else:
self._call_stdio('print', 'Local deploy is empty')
return True
def get_install_plugin_and_install(self, repositories, pkgs):
# Check if the component contains the installation plugins
install_plugins = self.search_plugins(repositories, PluginType.INSTALL)
if install_plugins is None:
return None
temp = self.search_plugins(pkgs, PluginType.INSTALL)
if temp is None:
return None
for pkg in temp:
repository = self.repository_manager.create_instance_repository(pkg.name, pkg.version, pkg.md5)
install_plugins[repository] = temp[pkg]
# Install for local
# self._call_stdio('print', 'install package for local ...')
for pkg in pkgs:
self._call_stdio('start_loading', 'install %s-%s for local' % (pkg.name, pkg.version))
# self._call_stdio('verbose', 'install %s-%s for local' % (pkg.name, pkg.version))
repository = self.repository_manager.create_instance_repository(pkg.name, pkg.version, pkg.md5)
if not repository.load_pkg(pkg, install_plugins[repository]):
self._call_stdio('stop_loading', 'fail')
self._call_stdio('error', 'Failed to extract file from %s' % pkg.path)
return None
self._call_stdio('stop_loading', 'succeed')
self.repository_manager.create_tag_for_repository(repository, pkg.name)
repositories.append(repository)
return install_plugins
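# Note: on success, get_install_plugin_and_install returns a dict mapping each repository to its
# install plugin; packages that had to be extracted locally are appended to the repositories list
# in place, so callers can keep iterating over the same list afterwards.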
def install_lib_for_repositories(self, repositories):
data = {}
temp_map = {}
for repository in repositories:
lib_name = '%s-libs' % repository.name
data[lib_name] = {'global': {
'version': repository.version
}}
temp_map[lib_name] = repository
try:
with tempfile.NamedTemporaryFile(suffix=".yaml", mode='w') as tf:
yaml_loader = YamlLoader(self.stdio)
yaml_loader.dump(data, tf)
deploy_config = DeployConfig(tf.name, yaml_loader)
# Look for the best suitable mirrors for the components
self._call_stdio('verbose', 'Search best suitable repository libs')
pkgs, lib_repositories, errors = self.search_components_from_mirrors(deploy_config, only_info=False)
if errors:
self._call_stdio('error', '\n'.join(errors))
return False
# Get the installation plugin and install locally
install_plugins = self.get_install_plugin_and_install(lib_repositories, pkgs)
if not install_plugins:
return False
repositories_lib_map = {}
for lib_repository in lib_repositories:
repository = temp_map[lib_repository.name]
install_plugin = install_plugins[lib_repository]
repositories_lib_map[repository] = {
'repositories': lib_repository,
'install_plugin': install_plugin
}
return repositories_lib_map
except:
self._call_stdio('exception', 'Failed to create lib-repo config file')
pass
return False
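# A sketch of the structure returned on success (one entry per input repository):
#     {<oceanbase-ce repository>: {'repositories': <oceanbase-ce-libs repository>,
#                                  'install_plugin': <install plugin for the libs package>}}
# On any failure the method returns False instead.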
def servers_repository_install(self, ssh_clients, servers, repository, install_plugin):
self._call_stdio('start_loading', 'Remote %s repository install' % repository)
self._call_stdio('verbose', 'Remote %s repository integrity check' % repository)
for server in servers:
self._call_stdio('verbose', '%s %s repository integrity check' % (server, repository))
client = ssh_clients[server]
remote_home_path = client.execute_command('echo $HOME/.obd').stdout.strip()
remote_repository_data_path = repository.data_file_path.replace(self.home_path, remote_home_path)
remote_repository_data = client.execute_command('cat %s' % remote_repository_data_path).stdout
self._call_stdio('verbose', '%s %s install check' % (server, repository))
try:
yaml_loader = YamlLoader(self.stdio)
data = yaml_loader.load(remote_repository_data)
if not data:
self._call_stdio('verbose', '%s %s need to be installed ' % (server, repository))
elif data == repository:
# Version sync. Check for damages (TODO)
self._call_stdio('verbose', '%s %s has installed ' % (server, repository))
continue
else:
self._call_stdio('verbose', '%s %s need to be updated' % (server, repository))
except:
self._call_stdio('verbose', '%s %s need to be installed ' % (server, repository))
for file_path in repository.file_list(install_plugin):
remote_file_path = file_path.replace(self.home_path, remote_home_path)
self._call_stdio('verbose', '%s %s installing' % (server, repository))
client.put_file(file_path, remote_file_path)
client.execute_command('chmod %s %s' % (oct(os.stat(file_path).st_mode)[-3: ], remote_file_path))
client.put_file(repository.data_file_path, remote_repository_data_path)
self._call_stdio('verbose', '%s %s installed' % (server, repository.name))
self._call_stdio('stop_loading', 'succeed')
def servers_repository_lib_check(self, ssh_clients, servers, repository, install_plugin, msg_lv='error'):
ret = True
self._call_stdio('start_loading', 'Remote %s repository lib check' % repository)
for server in servers:
self._call_stdio('verbose', '%s %s repository lib check' % (server, repository))
client = ssh_clients[server]
need_libs = set()
remote_home_path = client.execute_command('echo $HOME/.obd').stdout.strip()
remote_repository_path = repository.repository_dir.replace(self.home_path, remote_home_path)
remote_repository_data_path = repository.data_file_path.replace(self.home_path, remote_home_path)
client.add_env('LD_LIBRARY_PATH', '%s/lib:' % remote_repository_path, True)
for file_path in repository.bin_list(install_plugin):
remote_file_path = file_path.replace(self.home_path, remote_home_path)
libs = client.execute_command('ldd %s' % remote_file_path).stdout
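# ldd prints one line per shared object; unresolved dependencies appear as
# "libfoo.so.1 => not found" (the library name here is only an example), and the
# pattern below collects exactly those missing names.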
need_libs.update(re.findall(r'(/?[\w+\-/]+\.\w+[\.\w]+)[\s\n]*=>[\s\n]*not found', libs))
if need_libs:
for lib in need_libs:
self._call_stdio(msg_lv, '%s %s require: %s' % (server, repository, lib))
ret = False
client.add_env('LD_LIBRARY_PATH', '', True)
self._call_stdio('stop_loading', 'succeed' if ret else msg_lv)
return ret
def servers_apply_lib_repository_and_check(self, ssh_clients, deploy_config, repositories, repositories_lib_map):
ret = True
servers_obd_home = {}
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
lib_repository = repositories_lib_map[repository]['repositories']
install_plugin = repositories_lib_map[repository]['install_plugin']
self._call_stdio('print', 'Use %s for %s' % (lib_repository, repository))
for server in cluster_config.servers:
client = ssh_clients[server]
if server not in servers_obd_home:
servers_obd_home[server] = client.execute_command('echo $HOME/.obd').stdout.strip()
remote_home_path = servers_obd_home[server]
remote_lib_repository_data_path = lib_repository.repository_dir.replace(self.home_path, remote_home_path)
# lib installation
self._call_stdio('verbose', 'Remote %s repository integrity check' % repository)
self.servers_repository_install(ssh_clients, cluster_config.servers, lib_repository, install_plugin)
for server in cluster_config.servers:
client = ssh_clients[server]
remote_home_path = servers_obd_home[server]
remote_repository_data_path = repository.repository_dir.replace(self.home_path, remote_home_path)
remote_lib_repository_data_path = lib_repository.repository_dir.replace(self.home_path, remote_home_path)
client.execute_command('ln -sf %s %s/lib' % (remote_lib_repository_data_path, remote_repository_data_path))
if self.servers_repository_lib_check(ssh_clients, cluster_config.servers, repository, install_plugin):
ret = False
for server in cluster_config.servers:
client = ssh_clients[server]
return ret
# If the cluster states are consistent, the status value is returned. Else False is returned.
def cluster_status_check(self, ssh_clients, deploy_config, repositories, ret_status=None):
if ret_status is None:
ret_status = {}
status_plugins = self.search_py_script_plugin(repositories, 'status')
component_status = {}
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
self._call_stdio('verbose', 'Call %s for %s' % (status_plugins[repository], repository))
plugin_ret = status_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio)
cluster_status = plugin_ret.get_return('cluster_status')
ret_status[repository] = cluster_status
for server in cluster_status:
if repository not in component_status:
component_status[repository] = cluster_status[server]
continue
if component_status[repository] != cluster_status[server]:
self._call_stdio('verbose', '%s cluster status is inconsistent' % repository)
break
else:
continue
return False
status = None
for repository in component_status:
if status is None:
status = component_status[repository]
continue
if status != component_status[repository]:
self._call_stdio('verbose', 'Deploy status inconsistent')
return False
return status
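# A minimal usage sketch (as used by deploy/start/stop below): per-repository statuses are written
# into the optional ret_status dict, and the aggregated value is the status shared by every server
# (the callers below treat 1 as running and 0 as stopped) or False when the servers disagree.
#
#     component_status = {}
#     cluster_status = self.cluster_status_check(ssh_clients, deploy_config, repositories, component_status)
#     if cluster_status is False or cluster_status == 1:
#         pass  # some servers are already started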
def deploy_cluster(self, name, opt=Values()):
self._call_stdio('verbose', 'Get Deploy by name')
deploy = self.deploy_manager.get_deploy_config(name)
if deploy:
self._call_stdio('verbose', 'Get deploy info')
deploy_info = deploy.deploy_info
self._call_stdio('verbose', 'judge deploy status')
if deploy_info.status not in [DeployStatus.STATUS_CONFIGURED, DeployStatus.STATUS_DESTROYED]:
self._call_stdio('error', 'Deploy "%s" is %s. You could not deploy an %s cluster.' % (name, deploy_info.status.value, deploy_info.status.value))
return False
if deploy_info.config_status != DeployConfigStatus.UNCHNAGE:
self._call_stdio('verbose', 'Apply temp deploy configuration')
if not deploy.apply_temp_deploy_config():
self._call_stdio('error', 'Failed to apply new deploy configuration')
return False
config_path = getattr(opt, 'config', '')
unuse_lib_repo = getattr(opt, 'unuselibrepo', False)
self._call_stdio('verbose', 'config path is None or not')
if config_path:
self._call_stdio('verbose', 'Create deploy by configuration path')
deploy = self.deploy_manager.create_deploy_config(name, config_path)
if not deploy:
self._call_stdio('error', 'Failed to create deploy: %s. Please check your configuration file' % name)
return False
if not deploy:
self._call_stdio('error', 'No such deploy: %s. You can input a configuration path to create a new deploy' % name)
return False
self._call_stdio('verbose', 'Get deploy configuration')
deploy_config = deploy.deploy_config
if not deploy_config:
self._call_stdio('error', 'Deploy configuration is empty.\nIt may be caused by a failure to resolve the configuration.\nPlease check your configuration file.')
return False
if not deploy_config.components:
self._call_stdio('error', 'Components not detected.\nPlease check the syntax of your configuration file.')
return False
for component_name in deploy_config.components:
if not deploy_config.components[component_name].servers:
self._call_stdio('error', '%s\'s servers list is empty.' % component_name)
return False
# Check the best suitable mirror for the components
self._call_stdio('verbose', 'Search best suitable repository')
pkgs, repositories, errors = self.search_components_from_mirrors(deploy_config, only_info=False)
if errors:
self._call_stdio('error', '\n'.join(errors))
return False
# Get the installation plugins. Install locally
install_plugins = self.get_install_plugin_and_install(repositories, pkgs)
if not install_plugins:
self._call_stdio('print', 'You could try using -f to force remove directory')
return False
self._call_stdio('print_list', repositories, ['Repository', 'Version', 'Md5'], lambda repository: [repository.name, repository.version, repository.hash], title='Packages')
errors = []
self._call_stdio('verbose', 'Repository integrity check')
for repository in repositories:
if not repository.file_check(install_plugins[repository]):
errors.append('%s install failed' % repository.name)
if errors:
self._call_stdio('error', '\n'.join(errors))
return False
# Check whether the components have the parameter plugins and apply the plugins
self.search_param_plugin_and_apply(repositories, deploy_config)
# Parameter check
self._call_stdio('verbose', 'Cluster param configuration check')
errors = self.deploy_param_check(repositories, deploy_config)
if errors:
self._call_stdio('error', '\n'.join(errors))
return False
if unuse_lib_repo and not deploy_config.unuse_lib_repository:
deploy_config.set_unuse_lib_repository(True)
lib_not_found_msg_func = 'error' if deploy_config.unuse_lib_repository else 'print'
# Get the client
ssh_clients = self.get_clients(deploy_config, repositories)
need_lib_repositories = []
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
# cluster files check
self.servers_repository_install(ssh_clients, cluster_config.servers, repository, install_plugins[repository])
# lib check
msg_lv = 'error' if deploy_config.unuse_lib_repository else 'warn'
if not self.servers_repository_lib_check(ssh_clients, cluster_config.servers, repository, install_plugins[repository], msg_lv):
need_lib_repositories.append(repository)
if need_lib_repositories:
if deploy_config.unuse_lib_repository:
# self._call_stdio('print', 'You could try using -U to work around the problem')
return False
self._call_stdio('print', 'Try to get lib-repository')
repositories_lib_map = self.install_lib_for_repositories(need_lib_repositories)
if repositories_lib_map is False:
self._call_stdio('error', 'Failed to install lib package for local')
return False
if self.servers_apply_lib_repository_and_check(ssh_clients, deploy_config, need_lib_repositories, repositories_lib_map):
self._call_stdio('error', 'Failed to install lib package for cluster servers')
return False
# Check the status for the deployed cluster
component_status = {}
cluster_status = self.cluster_status_check(ssh_clients, deploy_config, repositories, component_status)
if cluster_status is False or cluster_status == 1:
if self.stdio:
self._call_stdio('error', 'Some of the servers in the cluster have been started')
for repository in component_status:
cluster_status = component_status[repository]
for server in cluster_status:
if cluster_status[server] == 1:
self._call_stdio('print', '%s %s is started' % (server, repository.name))
return False
self._call_stdio('verbose', 'Search init plugin')
init_plugins = self.search_py_script_plugin(repositories, 'init', False)
component_num = len(repositories)
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
init_plugin = self.plugin_manager.get_best_py_script_plugin('init', repository.name, repository.version)
if repository in init_plugins:
init_plugin = init_plugins[repository]
self._call_stdio('verbose', 'Exec %s init plugin' % repository)
self._call_stdio('verbose', 'Apply %s for %s-%s' % (init_plugin, repository.name, repository.version))
if init_plugin(deploy_config.components.keys(), ssh_clients, cluster_config, [], opt, self.stdio):
deploy.use_model(repository.name, repository, False)
component_num -= 1
else:
self._call_stdio('print', 'No such init plugin for %s' % repository.name)
if component_num == 0 and deploy.update_deploy_status(DeployStatus.STATUS_DEPLOYED):
self._call_stdio('print', '%s deployed' % name)
return True
return False
def start_cluster(self, name, cmd=[], options=Values()):
self._call_stdio('verbose', 'Get Deploy by name')
deploy = self.deploy_manager.get_deploy_config(name)
if not deploy:
self._call_stdio('error', 'No such deploy: %s.' % name)
return False
deploy_info = deploy.deploy_info
self._call_stdio('verbose', 'Deploy status judge')
if deploy_info.status not in [DeployStatus.STATUS_DEPLOYED, DeployStatus.STATUS_STOPPED, DeployStatus.STATUS_RUNNING]:
self._call_stdio('error', 'Deploy "%s" is %s. You could not start an %s cluster.' % (name, deploy_info.status.value, deploy_info.status.value))
return False
if deploy_info.config_status == DeployConfigStatus.NEED_REDEPLOY:
self._call_stdio('error', 'Deploy needs redeploy')
return False
if deploy_info.config_status != DeployConfigStatus.UNCHNAGE:
self._call_stdio('verbose', 'Apply temp deploy configuration')
if not deploy.apply_temp_deploy_config():
self._call_stdio('error', 'Failed to apply new deploy configuration')
return False
self._call_stdio('verbose', 'Get deploy config')
deploy_config = deploy.deploy_config
self._call_stdio('start_loading', 'Get local repositories and plugins')
# Get the repository
repositories = self.load_local_repositories(deploy_config, False)
# Get the client
ssh_clients = self.get_clients(deploy_config, repositories)
# Check the status for the deployed cluster
component_status = {}
if DeployStatus.STATUS_RUNNING == deploy_info.status:
cluster_status = self.cluster_status_check(ssh_clients, deploy_config, repositories, component_status)
if cluster_status == 1:
self._call_stdio('print', 'Deploy "%s" is running' % name)
return True
# Check whether the components have the parameter plugins and apply the plugins
self.search_param_plugin_and_apply(repositories, deploy_config)
# Parameter check
self._call_stdio('verbose', 'Cluster param config check')
errors = self.deploy_param_check(repositories, deploy_config)
if errors:
self._call_stdio('error', '\n'.join(errors))
return False
start_check_plugins = self.search_py_script_plugin(repositories, 'start_check', False)
start_plugins = self.search_py_script_plugin(repositories, 'start')
connect_plugins = self.search_py_script_plugin(repositories, 'connect')
bootstrap_plugins = self.search_py_script_plugin(repositories, 'bootstrap')
display_plugins = self.search_py_script_plugin(repositories, 'display')
self._call_stdio('stop_loading', 'succeed')
strict_check = getattr(options, 'strict_check', False)
success = True
for repository in repositories:
if repository not in start_check_plugins:
continue
cluster_config = deploy_config.components[repository.name]
self._call_stdio('verbose', 'Call %s for %s' % (start_check_plugins[repository], repository))
ret = start_check_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, cmd, options, self.stdio, alert_lv='error' if strict_check else 'warn')
if not ret:
success = False
if strict_check and success is False:
# self._call_stdio('verbose', 'Starting check failed. Use --skip-check to skip the starting check. However, this may lead to a starting failure.')
return False
component_num = len(repositories)
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
if not deploy_config.unuse_lib_repository:
for server in cluster_config.servers:
client = ssh_clients[server]
remote_home_path = client.execute_command('echo $HOME/.obd').stdout.strip()
remote_repository_path = repository.repository_dir.replace(self.home_path, remote_home_path)
client.add_env('LD_LIBRARY_PATH', '%s/lib:' % remote_repository_path, True)
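# The lib path is only needed while the start plugin runs; it is cleared again on every
# client right after the component has been started (see below).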
self._call_stdio('verbose', 'Call %s for %s' % (start_plugins[repository], repository))
ret = start_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, cmd, options, self.stdio, self.home_path, repository.repository_dir)
if ret:
need_bootstrap = ret.get_return('need_bootstrap')
else:
self._call_stdio('error', '%s start failed' % repository.name)
break
if not deploy_config.unuse_lib_repository:
for server in cluster_config.servers:
client = ssh_clients[server]
client.add_env('LD_LIBRARY_PATH', '', True)
self._call_stdio('verbose', 'Call %s for %s' % (connect_plugins[repository], repository))
ret = connect_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, cmd, options, self.stdio)
if ret:
db = ret.get_return('connect')
cursor = ret.get_return('cursor')
else:
self._call_stdio('error', 'Failed to connect %s' % repository.name)
break
if need_bootstrap:
self._call_stdio('print', 'Initialize cluster')
self._call_stdio('verbose', 'Call %s for %s' % (bootstrap_plugins[repository], repository))
if not bootstrap_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, cmd, options, self.stdio, cursor):
self._call_stdio('print', 'Cluster init failed')
break
self._call_stdio('verbose', 'Call %s for %s' % (display_plugins[repository], repository))
if display_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, cmd, options, self.stdio, cursor):
component_num -= 1
if component_num == 0:
self._call_stdio('verbose', 'Set %s deploy status to running' % name)
if deploy.update_deploy_status(DeployStatus.STATUS_RUNNING):
self._call_stdio('print', '%s running' % name)
return True
return False
def reload_cluster(self, name):
self._call_stdio('verbose', 'Get Deploy by name')
deploy = self.deploy_manager.get_deploy_config(name)
if not deploy:
self._call_stdio('error', 'No such deploy: %s. Input the configuration path to create a new deploy' % name)
return False
deploy_info = deploy.deploy_info
self._call_stdio('verbose', 'Deploy status judge')
if deploy_info.status != DeployStatus.STATUS_RUNNING:
self._call_stdio('error', 'Deploy "%s" is %s. You could not reload an %s cluster.' % (name, deploy_info.status.value, deploy_info.status.value))
return False
if deploy_info.config_status != DeployConfigStatus.NEED_RELOAD:
self._call_stdio('error', 'Deploy config %s' % deploy_info.config_status.value)
return False
self._call_stdio('verbose', 'Get deploy config')
deploy_config = deploy.deploy_config
self._call_stdio('verbose', 'Apply new deploy config')
new_deploy_config = DeployConfig(deploy.get_temp_deploy_yaml_path(deploy.config_dir), YamlLoader(self.stdio))
self._call_stdio('start_loading', 'Get local repositories and plugins')
# Get the repository
repositories = self.load_local_repositories(deploy_config)
# Check whether the components have the parameter plugins and apply the plugins
self.search_param_plugin_and_apply(repositories, deploy_config)
self.search_param_plugin_and_apply(repositories, new_deploy_config)
reload_plugins = self.search_py_script_plugin(repositories, 'reload')
connect_plugins = self.search_py_script_plugin(repositories, 'connect')
self._call_stdio('stop_loading', 'succeed')
# Get the client
ssh_clients = self.get_clients(deploy_config, repositories)
# Check the status for the deployed cluster
component_status = {}
cluster_status = self.cluster_status_check(ssh_clients, deploy_config, repositories, component_status)
if cluster_status is False or cluster_status == 0:
if self.stdio:
self._call_stdio('error', 'Some of the servers in the cluster have been stopped')
for repository in component_status:
cluster_status = component_status[repository]
for server in cluster_status:
if cluster_status[server] == 0:
self._call_stdio('print', '%s %s is stopped' % (server, repository.name))
return False
component_num = len(repositories)
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
new_cluster_config = new_deploy_config.components[repository.name]
self._call_stdio('verbose', 'Call %s for %s' % (connect_plugins[repository], repository))
ret = connect_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio)
if ret:
db = ret.get_return('connect')
cursor = ret.get_return('cursor')
else:
self._call_stdio('error', 'Failed to connect %s' % repository.name)
continue
self._call_stdio('verbose', 'Call %s for %s' % (reload_plugins[repository], repository))
if not reload_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, cursor, new_cluster_config):
continue
component_num -= 1
if component_num == 0:
if deploy.apply_temp_deploy_config():
self._call_stdio('print', '%s reload' % name)
return True
else:
deploy_config.dump()
self._call_stdio('warn', 'Some configuration items reload failed')
return False
def display_cluster(self, name):
self._call_stdio('verbose', 'Get Deploy by name')
deploy = self.deploy_manager.get_deploy_config(name)
if not deploy:
self._call_stdio('error', 'No such deploy: %s.' % name)
return False
deploy_info = deploy.deploy_info
self._call_stdio('verbose', 'Deploy status judge')
if deploy_info.status != DeployStatus.STATUS_RUNNING:
self._call_stdio('print', 'Deploy "%s" is %s' % (name, deploy_info.status.value))
return False
self._call_stdio('verbose', 'Get deploy config')
deploy_config = deploy.deploy_config
self._call_stdio('start_loading', 'Get local repositories and plugins')
# Get the repository
repositories = self.load_local_repositories(deploy_config)
# Check whether the components have the parameter plugins and apply the plugins
self.search_param_plugin_and_apply(repositories, deploy_config)
connect_plugins = self.search_py_script_plugin(repositories, 'connect')
display_plugins = self.search_py_script_plugin(repositories, 'display')
# Get the client
ssh_clients = self.get_clients(deploy_config, repositories)
self._call_stdio('stop_loading', 'succeed')
# Check the status for the deployed cluster
component_status = {}
cluster_status = self.cluster_status_check(ssh_clients, deploy_config, repositories, component_status)
if cluster_status is False or cluster_status == 0:
if self.stdio:
self._call_stdio('error', 'Some of the servers in the cluster have been stopped')
for repository in component_status:
cluster_status = component_status[repository]
for server in cluster_status:
if cluster_status[server] == 0:
self._call_stdio('print', '%s %s is stopped' % (server, repository.name))
return False
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
db = None
cursor = None
self._call_stdio('verbose', 'Call %s for %s' % (connect_plugins[repository], repository))
ret = connect_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio)
if ret:
db = ret.get_return('connect')
cursor = ret.get_return('cursor')
if not db:
self._call_stdio('error', 'Failed to connect %s' % repository.name)
return False
self._call_stdio('verbose', 'Call %s for %s' % (display_plugins[repository], repository))
display_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, cursor)
return True
def stop_cluster(self, name):
self._call_stdio('verbose', 'Get Deploy by name')
deploy = self.deploy_manager.get_deploy_config(name)
if not deploy:
self._call_stdio('error', 'No such deploy: %s.' % name)
return False
deploy_info = deploy.deploy_info
self._call_stdio('verbose', 'Check the deploy status')
if deploy_info.status != DeployStatus.STATUS_RUNNING:
self._call_stdio('error', 'Deploy "%s" is %s. You could not stop an %s cluster.' % (name, deploy_info.status.value, deploy_info.status.value))
return False
self._call_stdio('verbose', 'Get deploy config')
deploy_config = deploy.deploy_config
self._call_stdio('start_loading', 'Get local repositories and plugins')
# Get the repository
repositories = self.load_local_repositories(deploy_config)
# Check whether the components have the parameter plugins and apply the plugins
self.search_param_plugin_and_apply(repositories, deploy_config)
stop_plugins = self.search_py_script_plugin(repositories, 'stop')
# Get the client
ssh_clients = self.get_clients(deploy_config, repositories)
self._call_stdio('stop_loading', 'succeed')
component_num = len(repositories)
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
self._call_stdio('verbose', 'Call %s for %s' % (stop_plugins[repository], repository))
if stop_plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio):
component_num -= 1
self._call_stdio('verbose', 'Set %s deploy status to stopped' % name)
if component_num == 0 and deploy.update_deploy_status(DeployStatus.STATUS_STOPPED):
self._call_stdio('print', '%s stopped' % name)
return True
return False
def restart_cluster(self, name):
self._call_stdio('verbose', 'Get Deploy by name')
deploy = self.deploy_manager.get_deploy_config(name)
if not deploy:
self._call_stdio('error', 'No such deploy: %s.' % name)
return False
deploy_info = deploy.deploy_info
self._call_stdio('verbose', 'Check the deploy status')
if deploy_info.status == DeployStatus.STATUS_RUNNING and not self.stop_cluster(name):
return False
return self.start_cluster(name)
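# redeploy_cluster below simply chains destroy -> deploy -> start and short-circuits: if destroy
# fails the cluster is not redeployed, and if deploy fails it is not started.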
def redeploy_cluster(self, name):
return self.destroy_cluster(name) and self.deploy_cluster(name) and self.start_cluster(name)
def destroy_cluster(self, name, opt=Values()):
self._call_stdio('verbose', 'Get Deploy by name')
deploy = self.deploy_manager.get_deploy_config(name)
if not deploy:
self._call_stdio('error', 'No such deploy: %s.' % name)
return False
deploy_info = deploy.deploy_info
self._call_stdio('verbose', 'Check deploy status')
if deploy_info.status == DeployStatus.STATUS_RUNNING:
if not self.stop_cluster(name):
return False
elif deploy_info.status not in [DeployStatus.STATUS_STOPPED, DeployStatus.STATUS_DEPLOYED]:
self._call_stdio('error', 'Deploy "%s" is %s. You could not destroy an undeployed cluster' % (name, deploy_info.status.value))
return False
self._call_stdio('verbose', 'Get deploy configuration')
deploy_config = deploy.deploy_config
self._call_stdio('start_loading', 'Get local repositories and plugins')
# Get the repository
repositories = self.load_local_repositories(deploy_config)
# Check whether the components have the parameter plugins and apply the plugins
self.search_param_plugin_and_apply(repositories, deploy_config)
plugins = self.search_py_script_plugin(repositories, 'destroy')
# Get the client
ssh_clients = self.get_clients(deploy_config, repositories)
self._call_stdio('stop_loading', 'succeed')
# Check the status for the deployed cluster
component_status = {}
cluster_status = self.cluster_status_check(ssh_clients, deploy_config, repositories, component_status)
if cluster_status is False or cluster_status == 1:
force_kill = getattr(opt, 'force_kill', False)
msg_lv = 'warn' if force_kill else 'error'
self._call_stdio(msg_lv, 'Some of the servers in the cluster are running')
if force_kill:
self._call_stdio('verbose', 'Try to stop cluster')
status = deploy.deploy_info.status
deploy.update_deploy_status(DeployStatus.STATUS_RUNNING)
if not self.stop_cluster(name):
deploy.update_deploy_status(status)
self._call_stdio('error', 'Failed to stop cluster')
return False
else:
if self.stdio:
for repository in component_status:
cluster_status = component_status[repository]
for server in cluster_status:
if cluster_status[server] == 1:
self._call_stdio('print', '%s %s is running' % (server, repository.name))
self._call_stdio('print', 'You could try using -f to force kill process')
return False
for repository in repositories:
cluster_config = deploy_config.components[repository.name]
self._call_stdio('verbose', 'Call %s for %s' % (plugins[repository], repository))
plugins[repository](deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio)
self._call_stdio('verbose', 'Set %s deploy status to destroyed' % name)
if deploy.update_deploy_status(DeployStatus.STATUS_DESTROYED):
self._call_stdio('print', '%s destroyed' % name)
return True
return False
def create_repository(self, options):
force = getattr(options, 'force', False)
necessary = ['name', 'version', 'path']
attrs = options.__dict__
success = True
for key in necessary:
if key not in attrs or not attrs[key]:
success = False
self._call_stdio('error', 'Option %s is required' % key)
if success is False:
return False
plugin = self.plugin_manager.get_best_plugin(PluginType.INSTALL, attrs['name'], attrs['version'])
if plugin:
self._call_stdio('verbose', 'Found %s for %s-%s' % (plugin, attrs['name'], attrs['version']))
else:
self._call_stdio('error', 'No such %s plugin for %s-%s' % (PluginType.INSTALL.name.lower(), attrs['name'], attrs['version']))
return False
files = {}
success = True
repo_path = attrs['path']
for item in plugin.file_list():
path = os.path.join(repo_path, item.src_path)
path = os.path.normcase(path)
if not os.path.exists(path):
path = os.path.join(repo_path, item.target_path)
path = os.path.normcase(path)
if not os.path.exists(path):
self._call_stdio('error', 'need file: %s ' % path)
success = False
continue
files[item.src_path] = path
if success is False:
return False
self._call_stdio('start_loading', 'Package')
try:
pkg = LocalPackage(repo_path, attrs['name'], attrs['version'], files, getattr(options, 'release', None), getattr(options, 'arch', None))
self._call_stdio('stop_loading', 'succeed')
except:
self._call_stdio('exception', 'Package failed')
self._call_stdio('stop_loading', 'fail')
return False
self._call_stdio('print', pkg)
repository = self.repository_manager.get_repository_allow_shadow(attrs['name'], attrs['version'], pkg.md5)
if os.path.exists(repository.repository_dir):
if not force or not DirectoryUtil.rm(repository.repository_dir):
self._call_stdio('error', 'Repository(%s) exists' % repository.repository_dir)
return False
repository = self.repository_manager.create_instance_repository(attrs['name'], attrs['version'], pkg.md5)
if not repository.load_pkg(pkg, plugin):
self._call_stdio('error', 'Failed to extract file from %s' % pkg.path)
return False
if 'tag' in attrs and attrs['tag']:
for tag in attrs['tag'].split(','):
tag_repository = self.repository_manager.get_repository_allow_shadow(tag, attrs['version'])
self._call_stdio('verbose', 'Create tag(%s) for %s' % (tag, attrs['name']))
if not self.repository_manager.create_tag_for_repository(repository, tag, force):
self._call_stdio('error', 'Repository(%s) already exists' % tag_repository.repository_dir)
return True
def mysqltest(self, name, opts):
self._call_stdio('verbose', 'Get Deploy by name')
deploy = self.deploy_manager.get_deploy_config(name)
if not deploy:
self._call_stdio('error', 'No such deploy: %s.' % name)
return False
deploy_info = deploy.deploy_info
self._call_stdio('verbose', 'Check deploy status')
if deploy_info.status != DeployStatus.STATUS_RUNNING:
self._call_stdio('print', 'Deploy "%s" is %s' % (name, deploy_info.status.value))
return False
self._call_stdio('verbose', 'Get deploy configuration')
deploy_config = deploy.deploy_config
if opts.component is None:
for component_name in ['obproxy', 'oceanbase', 'oceanbase-ce']:
if component_name in deploy_config.components:
opts.component = component_name
break
if opts.component not in deploy_config.components:
self._call_stdio('error', 'Cannot find the component for mysqltest. Use `--component` to select a component')
return False
cluster_config = deploy_config.components[opts.component]
if not cluster_config.servers:
self._call_stdio('error', '%s server list is empty' % opts.component)
return False
if opts.test_server is None:
opts.test_server = cluster_config.servers[0]
else:
for server in cluster_config.servers:
if server.name == opts.test_server:
opts.test_server = server
break
else:
self._call_stdio('error', '%s is not a server in %s' % (opts.test_server, opts.component))
return False
if opts.auto_retry:
for component_name in ['oceanbase', 'oceanbase-ce']:
if component_name in deploy_config.components:
break
else:
opts.auto_retry = False
self._call_stdio('warn', 'Set auto-retry to false because %s does not contain an oceanbase database configuration' % name)
self._call_stdio('start_loading', 'Get local repositories and plugins')
# Get the repository
repositories = self.get_local_repositories({opts.component: deploy_config.components[opts.component]})
repository = repositories[0]
# Check whether the components have the parameter plugins and apply the plugins
self.search_param_plugin_and_apply(repositories, deploy_config)
# Get the client
ssh_clients = self.get_clients(deploy_config, repositories)
self._call_stdio('stop_loading', 'succeed')
# Check the status for the deployed cluster
component_status = {}
cluster_status = self.cluster_status_check(ssh_clients, deploy_config, repositories, component_status)
if cluster_status is False or cluster_status == 0:
if self.stdio:
self._call_stdio('error', 'Some of the servers in the cluster have been stopped')
for repository in component_status:
cluster_status = component_status[repository]
for server in cluster_status:
if cluster_status[server] == 0:
self._call_stdio('print', '%s %s is stopped' % (server, repository.name))
return False
connect_plugin = self.search_py_script_plugin(repositories, 'connect')[repository]
ret = connect_plugin(deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, target_server=opts.test_server, sys_root=False)
if not ret or not ret.get_return('connect'):
self._call_stdio('error', 'Failed to connect to the server')
return False
db = ret.get_return('connect')
cursor = ret.get_return('cursor')
mysqltest_init_plugin = self.plugin_manager.get_best_py_script_plugin('init', 'mysqltest', repository.version)
mysqltest_check_opt_plugin = self.plugin_manager.get_best_py_script_plugin('check_opt', 'mysqltest', repository.version)
mysqltest_check_test_plugin = self.plugin_manager.get_best_py_script_plugin('check_test', 'mysqltest', repository.version)
mysqltest_run_test_plugin = self.plugin_manager.get_best_py_script_plugin('run_test', 'mysqltest', repository.version)
env = opts.__dict__
env['cursor'] = cursor
env['host'] = opts.test_server.ip
env['port'] = db.port
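# env now carries everything the mysqltest plugins need: the parsed command-line options plus
# the live cursor and the host/port of the server under test.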
self._call_stdio('verbose', 'Call %s for %s' % (mysqltest_check_opt_plugin, repository))
ret = mysqltest_check_opt_plugin(deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, env)
if not ret:
return False
self._call_stdio('verbose', 'Call %s for %s' % (mysqltest_check_test_plugin, repository))
ret = mysqltest_check_test_plugin(deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, env)
if not ret:
self._call_stdio('error', 'Failed to get test set')
return False
if not env['test_set']:
self._call_stdio('error', 'Test set is empty')
return False
if env['need_init']:
self._call_stdio('verbose', 'Call %s for %s' % (mysqltest_init_plugin, repository))
if not mysqltest_init_plugin(deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, env):
self._call_stdio('error', 'Failed to init for mysqltest')
return False
result = []
for test in env['test_set']:
ret = mysqltest_run_test_plugin(deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, test, env)
if not ret:
break
case_result = ret.get_return('result')
if case_result['ret'] != 0 and opts.auto_retry:
cursor.close()
db.close()
if getattr(self.stdio, 'sub_io', None):
stdio = self.stdio.sub_io(msg_lv=MsgLevel.ERROR)
else:
stdio = None
self._call_stdio('start_loading', 'Reboot')
obd = ObdHome(self.home_path, stdio=stdio, lock=False)
if obd.redeploy_cluster(name):
self._call_stdio('stop_loading', 'succeed')
else:
self._call_stdio('stop_loading', 'fail')
result.append(case_result)
break
connect_plugin = self.search_py_script_plugin(repositories, 'connect')[repository]
ret = connect_plugin(deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, target_server=opts.test_server, sys_root=False)
if not ret or not ret.get_return('connect'):
self._call_stdio('error', 'Failed to connect to the server')
break
db = ret.get_return('connect')
cursor = ret.get_return('cursor')
env['cursor'] = cursor
self._call_stdio('verbose', 'Call %s for %s' % (mysqltest_init_plugin, repository))
if not mysqltest_init_plugin(deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, env):
self._call_stdio('error', 'Failed to prepare for mysqltest')
break
ret = mysqltest_run_test_plugin(deploy_config.components.keys(), ssh_clients, cluster_config, [], {}, self.stdio, test, env)
if not ret:
break
case_result = ret.get_return('result')
result.append(case_result)
passcnt = len(list(filter(lambda x: x["ret"] == 0, result)))
totalcnt = len(env['test_set'])
failcnt = totalcnt - passcnt
if result:
self._call_stdio(
'print_list', result, ['Case', 'Cost (s)', 'Status'],
lambda x: [x['name'], '%.2f' % x['cost'], '\033[31mFAILED\033[0m' if x['ret'] else '\033[32mPASSED\033[0m'],
title='Result (Total %d, Passed %d, Failed %d)' % (totalcnt, passcnt, failcnt),
align={'Cost (s)': 'r'}
)
if failcnt:
self._call_stdio('print', 'Mysqltest failed')
else:
self._call_stdio('print', 'Mysqltest passed')
return True
return False
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
- name: z1
# Please don't use a hostname; only an IP address is supported
ip: 172.19.33.2
- name: z2
ip: 172.19.33.3
- name: z3
ip: 172.19.33.4
global:
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: eth0
# If the current hardware's memory capacity is smaller than 50G, please use the settings in "mini-single-example.yaml" and adjust them slightly.
memory_limit: 64G
datafile_disk_percentage: 20
cluster_id: 1
# In this example, multiple observer processes run on a single node, so each process uses different ports.
# If you deploy the ob cluster on multiple nodes, the port and path settings can be the same.
z1:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone1
z2:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone2
z3:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone3
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
- name: z1
# Please don't use a hostname; only an IP address is supported
ip: 172.19.33.2
- name: z2
ip: 172.19.33.3
- name: z3
ip: 172.19.33.4
global:
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: eth0
# If the current hardware's memory capacity is smaller than 50G, please use the settings in "mini-single-example.yaml" and adjust them slightly.
memory_limit: 64G
datafile_disk_percentage: 20
cluster_id: 1
# In this example, multiple observer processes run on a single node, so each process uses different ports.
# If you deploy the ob cluster on multiple nodes, the port and path settings can be the same.
z1:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone1
z2:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone2
z3:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone3
obproxy:
servers:
- 192.168.1.5
global:
listen_port: 2883
home_path: /root/obproxy
# oceanbase root server list
# format: ip:mysql_port;ip:mysql_port
rs_list: 192.168.1.2:2883;192.168.1.3:2883;192.168.1.4:2883
enable_cluster_checkout: false
\ No newline at end of file
oceanbase-ce:
servers:
# Please don't use a hostname; only an IP address is supported
- 127.0.0.1
global:
home_path: /root/observer
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: lo
mysql_port: 2883
rpc_port: 2882
zone: zone1
# If the current hardware's memory capacity is smaller than 50G, please use the settings in "mini-single-example.yaml" and adjust them slightly.
memory_limit: 64G
datafile_disk_percentage: 20
cluster_id: 1
\ No newline at end of file
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
- name: z1
# Please don't use a hostname; only an IP address is supported
ip: 172.19.33.2
- name: z2
ip: 172.19.33.3
- name: z3
ip: 172.19.33.4
global:
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: eth0
cluster_id: 1
datafile_size: 8G
# Please set memory_limit to a suitable value that matches your available resources.
memory_limit: 8G
system_memory: 4G
stack_size: 512K
cpu_count: 16
cache_wash_threshold: 1G
__min_full_resource_pool_memory: 268435456
workers_per_cpu_quota: 10
schema_history_expire_time: 1d
# The value of net_thread_count should preferably equal the number of CPU cores.
net_thread_count: 4
major_freeze_duty_time: Disable
minor_freeze_times: 10
enable_separate_sys_clog: 0
enable_merge_by_turn: FALSE
datafile_disk_percentage: 20
z1:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone1
z2:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone2
z3:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone3
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
- name: z1
# Please don't use a hostname; only an IP address is supported
ip: 172.19.33.2
- name: z2
ip: 172.19.33.3
- name: z3
ip: 172.19.33.4
global:
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: eth0
cluster_id: 1
datafile_size: 8G
# Please set memory_limit to a suitable value that matches your available resources.
memory_limit: 8G
system_memory: 4G
stack_size: 512K
cpu_count: 16
cache_wash_threshold: 1G
__min_full_resource_pool_memory: 268435456
workers_per_cpu_quota: 10
schema_history_expire_time: 1d
# The value of net_thread_count should preferably equal the number of CPU cores.
net_thread_count: 4
major_freeze_duty_time: Disable
minor_freeze_times: 10
enable_separate_sys_clog: 0
enable_merge_by_turn: FALSE
datafile_disk_percentage: 20
z1:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone1
z2:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone2
z3:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone3
obproxy:
servers:
- 192.168.1.5
global:
listen_port: 2883
home_path: /root/obproxy
# oceanbase root server list
# format: ip:mysql_port;ip:mysql_port
rs_list: 192.168.1.2:2883;192.168.1.3:2883;192.168.1.4:2883
enable_cluster_checkout: false
\ No newline at end of file
oceanbase-ce:
servers:
# Please don't use a hostname; only an IP address is supported
- 127.0.0.1
global:
home_path: /root/observer
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: lo
mysql_port: 2883
rpc_port: 2882
zone: zone1
cluster_id: 1
datafile_size: 8G
# Please set memory_limit to a suitable value that matches your available resources.
memory_limit: 8G
system_memory: 4G
stack_size: 512K
cpu_count: 16
cache_wash_threshold: 1G
__min_full_resource_pool_memory: 268435456
workers_per_cpu_quota: 10
schema_history_expire_time: 1d
# The value of net_thread_count should preferably equal the number of CPU cores.
net_thread_count: 4
sys_bkgd_migration_retry_num: 3
minor_freeze_times: 10
enable_separate_sys_clog: 0
enable_merge_by_turn: FALSE
datafile_disk_percentage: 20
oceanbase-ce:
servers:
# Please don't use a hostname; only an IP address is supported
- 192.168.1.3
global:
home_path: /root/observer
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: eth0
mysql_port: 2883
rpc_port: 2882
zone: zone1
cluster_id: 1
datafile_size: 8G
# Please set memory_limit to a suitable value that matches your available resources.
memory_limit: 8G
system_memory: 4G
stack_size: 512K
cpu_count: 16
cache_wash_threshold: 1G
__min_full_resource_pool_memory: 268435456
workers_per_cpu_quota: 10
schema_history_expire_time: 1d
# The value of net_thread_count should preferably equal the number of CPU cores.
net_thread_count: 4
major_freeze_duty_time: Disable
minor_freeze_times: 10
enable_separate_sys_clog: 0
enable_merge_by_turn: FALSE
datafile_disk_percentage: 20
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
# Please don't use a hostname; only an IP address is supported
- 192.168.1.3
global:
home_path: /root/observer
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: eth0
mysql_port: 2883
rpc_port: 2882
zone: zone1
cluster_id: 1
datafile_size: 8G
# Please set memory_limit to a suitable value that matches your available resources.
memory_limit: 8G
system_memory: 4G
stack_size: 512K
cpu_count: 16
cache_wash_threshold: 1G
__min_full_resource_pool_memory: 268435456
workers_per_cpu_quota: 10
schema_history_expire_time: 1d
# The value of net_thread_count should preferably equal the number of CPU cores.
net_thread_count: 4
major_freeze_duty_time: Disable
minor_freeze_times: 10
enable_separate_sys_clog: 0
enable_merge_by_turn: FALSE
datafile_disk_percentage: 20
obproxy:
servers:
- 192.168.1.2
global:
listen_port: 2883
home_path: /root/obproxy
# oceanbase root server list
# format: ip:mysql_port;ip:mysql_port
rs_list: 192.168.1.3:2883
enable_cluster_checkout: false
\ No newline at end of file
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
# Please don't use a hostname; only an IP address is supported
- 192.168.1.3
global:
home_path: /root/observer
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: eth0
mysql_port: 2883
rpc_port: 2882
zone: zone1
# If the current hardware's memory capacity is smaller than 50G, please use the settings in "mini-single-example.yaml" and adjust them slightly.
memory_limit: 64G
datafile_disk_percentage: 20
cluster_id: 1
\ No newline at end of file
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
# Please don't use a hostname; only an IP address is supported
- 192.168.1.3
global:
home_path: /root/observer
# Please set devname as the name of the network adaptor whose IP is in the servers setting.
# If servers is set to "127.0.0.1", set devname to "lo".
# If the current IP is 192.168.1.10 and that IP's network adaptor is named "eth0", use "eth0".
devname: eth0
mysql_port: 2883
rpc_port: 2882
zone: zone1
# If the current hardware's memory capacity is smaller than 50G, please use the settings in "mini-single-example.yaml" and adjust them slightly.
memory_limit: 64G
datafile_disk_percentage: 20
cluster_id: 1
obproxy:
servers:
- 192.168.1.2
global:
listen_port: 2883
home_path: /root/obproxy
# oceanbase root server list
# format: ip:mysql_port,ip:mysql_port
rs_list: 192.168.1.3:2883
enable_cluster_checkout: false
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import logging
from logging import handlers
class Logger(logging.Logger):
def __init__(self, name, level=logging.DEBUG):
super(Logger, self).__init__(name, level)
self.buffer = []
self.buffer_size = 0
def _log(self, level, msg, args, end='\n', **kwargs):
return super(Logger, self)._log(level, msg, args, **kwargs)
MySQL-python==1.2.5
PyMySQL==1.0.2
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
from ssh import LocalClient
def check_opt(plugin_context, opt, *args, **kwargs):
stdio = plugin_context.stdio
server = opt['test_server']
obclient_bin = opt['obclient_bin']
mysqltest_bin = opt['mysqltest_bin']
if not server:
stdio.error('test server is None. Please use `--test-server` to set it')
return
ret = LocalClient.execute_command('%s --help' % obclient_bin, stdio=stdio)
if not ret:
stdio.error('%s\n%s is not an executable file. Please use `--obclient-bin` to set it.\nYou may not have obclient installed' % (ret.stderr, obclient_bin))
return
ret = LocalClient.execute_command('%s --help' % mysqltest_bin, stdio=stdio)
if not ret:
mysqltest_bin = opt['mysqltest_bin'] = 'mysqltest'
if not LocalClient.execute_command('%s --help' % mysqltest_bin, stdio=stdio):
stdio.error('%s\n%s is not an executable file. Please use `--mysqltest-bin` to set it.\nYou may not have mysqltest installed' % (ret.stderr, mysqltest_bin))
return
if 'suite_dir' not in opt or not os.path.exists(opt['suite_dir']):
opt['suite_dir'] = os.path.join(os.path.split(__file__)[0], 'test_suite')
if 'all' in opt and opt['all']:
opt['suite'] = ','.join(os.listdir(opt['suite_dir']))
elif 'suite' in opt and opt['suite']:
opt['suite'] = opt['suite'].strip()
if 'slb' in opt:
opt['slb_host'], opt['slb_id'] = opt['slb'].split(',')
return plugin_context.return_true()
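# Summary (added note, derived from the code above): check_opt validates the mysqltest
# options in place. It requires --test-server, checks that obclient_bin and mysqltest_bin
# are executable (falling back to a plain "mysqltest" on PATH), defaults suite_dir to the
# bundled test_suite directory, expands the "all" flag into a comma-separated suite list,
# and splits an "slb" value of the form "host,id" into slb_host and slb_id before
# returning plugin_context.return_true().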
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
from glob import glob
from mysqltest_lib import case_filter, succtest
from mysqltest_lib.psmallsource import psmall_source
from mysqltest_lib.psmalltest import psmall_test
def check_test(plugin_context, opt, *args, **kwargs):
test_set = []
has_test_point = False
basename = lambda path: os.path.basename(path)
dirname = lambda path: os.path.dirname(path)
if 'all' in opt and opt['all'] and os.path.isdir(os.path.realpath(opt['suite_dir'])):
opt['suite'] = ','.join(os.listdir(os.path.realpath(opt['suite_dir'])))
if 'psmall' in opt and opt['psmall']:
test_set = psmall_test
opt['source_limit'] = psmall_source
elif 'suite' not in opt or not opt['suite']:
if 'test_set' in opt and opt['test_set']:
test_set = opt['test_set'].split(',')
has_test_point = True
else:
if 'test_pattern' not in opt or not opt['test_pattern']:
opt['test_pattern'] = '*.test'
else:
has_test_point = True
pat = os.path.join(opt['test_dir'], opt['test_pattern'])
test_set = [basename(test).rsplit('.', 1)[0] for test in glob(pat)]
else:
opt['test_dir_suite'] = [os.path.join(opt['suite_dir'], suite, 't') for suite in opt['suite'].split(',')]
opt['result_dir_suite'] = [os.path.join(opt['suite_dir'], suite, 'r') for suite in opt['suite'].split(',')]
has_test_point = True
for path in opt['test_dir_suite']:
suitename = basename(dirname(path))
if 'test_set' in opt and opt['test_set']:
test_set_tmp = [suitename + '.' + test for test in opt['test_set'].split(',')]
else:
if 'test_pattern' not in opt or not opt['test_pattern']:
opt['test_pattern'] = '*.test'
pat = os.path.join(path, opt['test_pattern'])
test_set_tmp = [suitename + '.' + basename(test).rsplit('.', 1)[0] for test in glob(pat)]
test_set.extend(test_set_tmp)
# exclude some tests.
if 'exclude' not in opt or not opt['exclude']:
opt['exclude'] = []
test_set = filter(lambda k: k not in opt['exclude'], test_set)
if 'filter' in opt and opt['filter']:
exclude_list = getattr(case_filter, '%s_list' % opt['filter'], [])
test_set = filter(lambda k: k not in exclude_list, test_set)
## When the "all" option is set, re-sort so the case execution order is deterministic: plain cases first, then suite cases
if 'all' in opt and opt['all'] == 'all':
test_set = list(test_set)  # materialize so it can be iterated more than once
test_set_suite = sorted(filter(lambda k: '.' in k, test_set))
test_set_t = filter(lambda k: k not in test_set_suite, test_set)
test_set = sorted(test_set_t)
test_set.extend(test_set_suite)
if 'succ' in opt and opt['succ'] == 'succ':
test_set = list(filter(lambda k: k not in succtest.succ_filter, test_set))
else:
test_set = sorted(test_set)
if 'slices' in opt and opt['slices'] and 'slice_idx' in opt and opt['slice_idx']:
slices = int(opt['slices'])
slice_idx = int(opt['slice_idx'])
test_set = test_set[slice_idx::slices]
opt['test_set'] = test_set
return plugin_context.return_true(test_set=test_set)
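# Worked example (added note): the slices/slice_idx options shard the sorted test set via
# test_set[slice_idx::slices]. With 10 cases, slices=4 and slice_idx=1 selects the cases at
# indexes 1, 5 and 9, so four runners using slice_idx 0..3 together cover every case exactly once.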
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import re
import os
from ssh import LocalClient
def parse_size(size):
_bytes = 0
if not isinstance(size, str) or size.isdigit():
_bytes = int(size)
else:
units = {"B": 1, "K": 1<<10, "M": 1<<20, "G": 1<<30, "T": 1<<40}
match = re.match(r'([1-9][0-9]*)([BKMGT])', size)
_bytes = int(match.group(1)) * units[match.group(2)]
return _bytes
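# Worked example (added note): parse_size('8G') returns 8 << 30 == 8589934592 bytes,
# parse_size('512K') returns 512 << 10 == 524288, and a plain digit string or int such as
# parse_size('8192') is returned unchanged as a raw byte count.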
def get_memory_limit(cursor, client):
try:
cursor.execute('show parameters where name = \'memory_limit\'')
memory_limit = cursor.fetchone()
if memory_limit and 'value' in memory_limit and memory_limit['value']:
return parse_size(memory_limit['value'])
ret = client.execute_command('free -b')
if ret:
ret = client.execute_command("cat /proc/meminfo | grep 'MemTotal:' | awk -F' ' '{print $2}'")
total_memory = int(ret.stdout) * 1024
cursor.execute('show parameters where name = \'memory_limit_percentage\'')
memory_limit_percentage = cursor.fetchone()
if memory_limit_percentage and 'value' in memory_limit_percentage and memory_limit_percentage['value']:
total_memory = total_memory * int(memory_limit_percentage['value']) / 100
return total_memory
except:
pass
return 0
def get_root_server(cursor):
try:
cursor.execute('select * from oceanbase.__all_server where status = \'active\' and with_rootserver=1')
return cursor.fetchone()
except:
pass
return None
def init(plugin_context, env, *args, **kwargs):
def exec_sql(cmd):
ret = re.match(r'(.*\.sql)(?:\|([^\|]*))?(?:\|([^\|]*))?', cmd)
if not ret:
stdio.error('parse cmd failed: %s' % cmd)
return False
cmd = ret.groups()
sql_file_path1 = os.path.join(init_sql_dir, cmd[0])
sql_file_path2 = os.path.join(plugin_init_sql_dir, cmd[0])
if os.path.isfile(sql_file_path1):
sql_file_path = sql_file_path1
elif os.path.isfile(sql_file_path2):
sql_file_path = sql_file_path2
else:
stdio.error('%s not found in [%s, %s]' % (cmd[0], init_sql_dir, plugin_init_sql_dir))
return False
exec_sql_cmd = exec_sql_temp % (cmd[1] if cmd[1] else 'root', cmd[2] if cmd[2] else 'oceanbase', sql_file_path)
ret = LocalClient.execute_command(exec_sql_cmd, stdio=stdio)
if ret:
return True
stdio.error('Failed to execute %s: %s' % (sql_file_path, ret.stderr.strip()))
return False
cluster_config = plugin_context.cluster_config
stdio = plugin_context.stdio
cursor = env['cursor']
obclient_bin = env['obclient_bin']
mysqltest_bin = env['mysqltest_bin']
server = env['test_server']
root_server = get_root_server(cursor)
if root_server:
port = root_server['inner_port']
host = root_server['svr_ip']
else:
stdio.error('Failed to get root server.')
return plugin_context.return_false()
init_sql_dir = env['init_sql_dir']
plugin_init_sql_dir = os.path.join(os.path.split(__file__)[0], 'init_sql')
exec_sql_temp = obclient_bin + ' --prompt "OceanBase(\\u@\d)>" -h ' + host + ' -P ' + str(port) + ' -u%s -D%s -c < %s'
if 'init_sql_files' in env and env['init_sql_files']:
init_sql = env['init_sql_files'].split(',')
else:
exec_init = 'init.sql'
exec_mini_init = 'init_mini.sql'
exec_init_user = 'init_user.sql|root@mysql|test'
client = plugin_context.clients[server]
memory_limit = get_memory_limit(cursor, client)
is_mini = memory_limit and parse_size(memory_limit) < (16<<30)
if is_mini:
init_sql = [exec_mini_init, exec_init_user]
else:
init_sql = [exec_init, exec_init_user]
stdio.start_loading('Execute initialize sql')
for sql in init_sql:
if not exec_sql(sql):
stdio.stop_loading('fail')
return plugin_context.return_false()
stdio.stop_loading('succeed')
return plugin_context.return_true()
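# Summary (added note, derived from the code above): init() looks up the active root server,
# then runs the bundled SQL through obclient. Unless init_sql_files overrides the list, it
# picks init_mini.sql when the resolved memory_limit is below 16 GiB (16 << 30 bytes) and
# init.sql otherwise, and always follows up with init_user.sql executed as root@mysql against
# the test database.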
system sleep 5;
alter system set balancer_idle_time = '10s';
create user 'admin' IDENTIFIED BY 'admin';
use oceanbase;
create database if not exists test;
use test;
grant all on *.* to 'admin' WITH GRANT OPTION;
alter system set merger_warm_up_duration_time = '0s';
alter system set zone_merge_concurrency = 2;
alter system set merger_check_interval = '10s';
alter system set enable_syslog_wf=false;
alter system set _enable_split_partition = true;
#FIXME: creating a tenant takes longer in schema-split mode; temporarily increase the statement timeout to work around it
set @@session.ob_query_timeout = 40000000;
create resource unit box1 max_cpu 2, max_memory 4073741824, max_iops 128, max_disk_size '5G', max_session_num 64, MIN_CPU=1, MIN_MEMORY=4073741824, MIN_IOPS=128;
create resource pool pool2 unit = 'box1', unit_num = 1;
create tenant mysql replica_num = 1, resource_pool_list=('pool2') set ob_tcp_invited_nodes='%', ob_compatibility_mode='mysql', parallel_max_servers=10, parallel_servers_target=10, secure_file_priv = "";
set @@session.ob_query_timeout = 10000000;
system sleep 5;
alter tenant sys set variables recyclebin = 'on';
alter tenant sys set variables ob_enable_truncate_flashback = 'on';
alter tenant mysql set variables ob_tcp_invited_nodes='%';
alter tenant mysql set variables recyclebin = 'on';
alter tenant mysql set variables ob_enable_truncate_flashback = 'on';
select count(*) from oceanbase.__all_server group by zone limit 1 into @num;
set @sql_text = concat('alter resource pool pool2', ' unit_num = ', @num);
prepare stmt from @sql_text;
execute stmt;
deallocate prepare stmt;
select primary_zone from oceanbase.__all_tenant where tenant_id = 1 into @zone_name;
alter tenant mysql primary_zone = @zone_name;
system sleep 5;
alter system set balancer_idle_time = '10s';
create user 'admin' IDENTIFIED BY 'admin';
use oceanbase;
create database if not exists test;
use test;
grant all on *.* to 'admin' WITH GRANT OPTION;
alter system set merger_warm_up_duration_time = '0s';
alter system set zone_merge_concurrency = 2;
alter system set merger_check_interval = '10s';
alter system set enable_syslog_wf=false;
alter system set _enable_split_partition = true;
#FIXME: creating a tenant takes longer in schema-split mode; temporarily increase the statement timeout to work around it
set @@session.ob_query_timeout = 40000000;
create resource unit box1 max_cpu 2, max_memory 805306368, max_iops 128, max_disk_size '5G', max_session_num 64, MIN_CPU=1, MIN_MEMORY=805306368, MIN_IOPS=128;
create resource pool pool2 unit = 'box1', unit_num = 1;
create tenant mysql replica_num = 1, resource_pool_list=('pool2') set ob_tcp_invited_nodes='%', ob_compatibility_mode='mysql', parallel_max_servers=10, parallel_servers_target=10, ob_sql_work_area_percentage=20, secure_file_priv = "";
alter resource unit sys_unit_config min_memory=1073741824,max_memory=1073741824;
set @@session.ob_query_timeout = 10000000;
system sleep 5;
alter tenant sys set variables recyclebin = 'on';
alter tenant sys set variables ob_enable_truncate_flashback = 'on';
alter tenant mysql set variables ob_tcp_invited_nodes='%';
alter tenant mysql set variables recyclebin = 'on';
alter tenant mysql set variables ob_enable_truncate_flashback = 'on';
select count(*) from oceanbase.__all_server group by zone limit 1 into @num;
set @sql_text = concat('alter resource pool pool2', ' unit_num = ', @num);
prepare stmt from @sql_text;
execute stmt;
deallocate prepare stmt;
select primary_zone from oceanbase.__all_tenant where tenant_id = 1 into @zone_name;
alter tenant mysql primary_zone = @zone_name;
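# Added note: this variant mirrors the preceding init script but sizes the mysql tenant's
# resource unit at 805306368 bytes (768 MiB) instead of 4073741824 bytes and caps
# sys_unit_config at 1073741824 bytes (1 GiB); the init plugin selects it when the
# observer's memory_limit is below 16 GiB.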
use oceanbase;
create user 'admin' IDENTIFIED BY 'admin';
grant all on *.* to 'admin' WITH GRANT OPTION;
create database obproxy;
alter system set _enable_split_partition = true;
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
### used to filter cases
### regression name: c,cp,j,jp,o,op,slave,proxy
#partition_range_test=["partition.ob_partition_hash_range", "partition.ndb_partition_range","partition.ob_partition_range", "partition.ob_partition_range_expr", "partition.ob_partition_trx", "partition.ob_partition_consistency", "partition.tc_partition_change_from_range_to_hash_key", "partition.partition_max_parts_range_innodb", "partition.ob_partition_hash_range", "partition.ob_partition_ddl", "partition.partition_max_sub_parts_range_innodb"]
partition_range_test=["partition.ob_partition_consistency", "partition.ob_partition_ddl", "partition.ob_partition_hash_range", "partition.partition_max_parts_range_innodb", "partition.ob_partition_range_expr", "partition.partition_max_sub_parts_range_innodb",
"partition.tc_partition_change_from_range_to_hash_key", "partition.ob_partition_trx"]
#ps_test=[]
ps_test=["ps.index_28_trx_concu_multi_dml_ps","ps.jdbc_ps_insert","ps.ps_affect_rows","ps.ps_datatype","ps.ps_lose_replace","ps.ps_max_concurrent","ps.sfu_norow_ps","ps.index_29_trx_concu_compound_ps","ps.jdbc_replace_ps","ps.ps_basic","ps.ps_execute_repeat","ps.ps_lose_replace_tr","ps.ps_muticonn_execute_repeat","ps.sql_audit_c_ps","ps.index_31_unique_ps","ps.jdbc_replace_ps_trx","ps.ps_cache","ps.ps_lose_cur_time","ps.ps_lose_select","ps.ps_muticonn_stress","ps.trx_expire_index_ps_1","ps.bigvarchar_ps","ps.index_45_ps_expire_1","ps.join_ps_bug","ps.ps_complex","ps.ps_lose_delete","ps.ps_lose_select_tr","ps.ps_normal","ps.trx_expire_index_ps_2","ps.decode_ps","ps.index_45_ps_expire_2","ps.ps_1","ps.ps_complex_delete","ps.ps_lose_delete_tr","ps.ps_lose_update","ps.ps_order_by","ps.update_delete_limit_ps","ps.decode_ps_return_type","ps.index_46_drop_ps","ps.ps_2","ps.ps_complex_insert","ps.ps_lose_insert","ps.ps_lose_update_tr","ps.ps_outline","ps.decode_ps_tbl","ps.jdbc_ps_all_statement","ps.ps_3","ps.ps_complex_replace","ps.ps_lose_insert_tr","ps.ps_lose_when","ps.ps_stress","ps.index_26_trx_concu_dml_samerow_ps","ps.jdbc_ps_complex","ps.ps_abs","ps.ps_complex_update","ps.ps_lose_muticonn","ps.ps_lose_when2","ps.ps_varchar"]
obsolete_px_function_list=['multi_partition_pq']
spm=['spm.spm_expr','spm.spm_with_acs_new_plan_not_ok']
merge_into=['merge_into.merge_insert','merge_into.merge_into_normal', 'merge_into.merge_subquery', 'merge_into.merge_update']
# TODO bin.lb:
# Temporary failure, remove this after updatable view commit.
updatable_view = ['view.is_views', 'create_frommysql' ]
excludes=['pl.sp-error-big_mysql','bug217660_xiaochu', 'collect','trx_collect', 'tenant.resource_pool_new',
'ps_cache',"parawhen_manytimes_outwaitlock","parawhen_manytimes_waitlock","ps_lose_when2","ps_lose_when","update_rowkey_when","when1","when2","when3","when_clause","when_idx_range","when_idxs_range","when_nest1","when_nonrowkey_range","when_nonrowkey_range_withidx","when_parallel1","when_parallel2","when_parallel3","when_select_for_update_wait","when_trx2","jdbc_ps_complex","ps_complex_delete","ps_complex_insert","ps_complex_replace","ps_complex","ps_complex_update","trx_complex","expire_bug5328455","expire_index","expire_index_trxdel","expire","expire_trx2","expire_trx_drop_tbl","expire_trx_modifydata","expire_trx_nop","expire_trx_replace2","expire_trx_replace","expire_trx","expire_trx_update","expire_unique_index_trxdel","index_45_ps_expire_1","index_45_ps_expire_2","trx_expire_alter_drop_add_col_idx","trx_expire_alter_drop_add_col","trx_expire_idx_unique_merge_step","trx_expire_index_ps_1","trx_expire_index_ps_2","trx_expire_more_oper","trx_expire_step_merge_num","zaddlmajor_gt64times_expire_trx","decode_col_convert","decode_extra_jdbc","aproject_account","decode_ps_return_type","decode_ps_tbl","decode_ps","decode_return_type","func_decode","bigvarchar_prejoin","prejoin_tabletype","prejoin","prejoin_update_basic","compound_bug","idx_unique_compound","index_29_trx_concu_compound_ps","ups_quick_compound_partital_rollback1","ups_quick_compound_partital_rollback","update_delete_compund","update_delete_compund_idx","update_delete_compund_uni","outline.outline_no_hint_check_hit","inner_table.all_virtual_sql_plan_monitor", "plan_cache.plan_cache_late_compile", "plan_cache.plan_cache_retry", "ocp", "update.update_ignore_multi_row", "update.update_ignore_multi_stmt", "update.update_ignore_one_row", "materialized_view.mv_basic3", "like_goes_index_dilang", "test_wisconsin","jit.expr_jit_basic_mysql"] + partition_range_test + obsolete_px_function_list + spm + merge_into + updatable_view
c_list=['ps_affect_rows',
'affect_rows',
'master_ups_lost_causedby_switch_twice',
'deadlock_causedby_ups_switch',
'information_schema.information_schema_db_nt',
'jdbc_ps_complex',
'jdbc_trx_with_merge',
'jdbc_parallel_trx',
'jdbc_ps_insert',
'jdbc_ps_all_statement',
'jdbc_replace_ps',
'jdbc_replace_ps_trx',
'jdbc_replace_parallel_trx',
'hex_ip_java',
'decode_ps_return_type',
'decode_return_type',
'decode_ps_return_type',
'decode_return_type',
'ps_lose_delete','ps_lose_delete_tr','ps_lose_insert','ps_lose_insert_tr','ps_lose_muticonn','ps_lose_replace','ps_lose_replace_tr','ps_lose_select','ps_lose_select_tr','ps_lose_update','ps_lose_update_tr','ps_muticonn_stress','ps_lose_when','ps_lose_when2',
'bind_variable_stress_greater_then_65535','bind_variable_stress_65535','update_rowkey_basic','update_rowkey_bug5003370','decode_extra_jdbc','func_sign','sql_audit_c_ps',"bigvarchar_ps","deadlock_causedby_ups_switch","decode_ps_return_type","decode_ps_tbl","decode_ps","index_26_trx_concu_dml_samerow_ps","index_28_trx_concu_multi_dml_ps","index_29_trx_concu_compound_ps","index_31_unique_ps","index_45_ps_expire_1","index_45_ps_expire_2","index_46_drop_ps","jdbc_ps_all_statement","jdbc_ps_complex","jdbc_ps_insert","jdbc_replace_ps","jdbc_replace_ps_trx","join_ps_bug","master_ups_lost_causedby_switch_twice","ps_1","ps_2","ps_3","ps_affect_rows","ps_basic","ps_cache","ps_complex_delete","ps_complex_insert","ps_complex_replace","ps_complex","ps_complex_update","ps_execute_repeat","ps_lose_cur_time","ps_lose_delete","ps_lose_delete_tr","ps_lose_insert","ps_lose_insert_tr","ps_lose_muticonn","ps_lose_replace","ps_lose_replace_tr","ps_lose_select","ps_lose_select_tr","ps_lose_update","ps_lose_update_tr","ps_lose_when2","ps_lose_when","ps_muticonn_execute_repeat","ps_muticonn_stress","ps_order_by","ps_stress","ps_varchar","sfu_norow_ps","sql_audit_c_ps","trx_expire_index_ps_1","trx_expire_index_ps_2","update_delete_limit_ps","upsmutiget","ups_quick_compound_partital_rollback1","ups_quick_compound_partital_rollback","vector_nps","bug_prepare_core","index_30_trx_prepare_in_trx","join.nested_loop_join_prepare_joinon","join.nested_loop_join_prepare_joinon_where","join.nested_loop_join_prepare","func_group_3","func_like_index","idx_const_basic_one","idx_const_basic_one_time","idx_const_basic_one_varchar","idx_unique_dml_varchar","idx_unique_many_idx","limitnegative","join.nested_loop_join","join.nested_loop_join_idx","join.nested_loop_join_idx_joinon","join.nested_loop_join_idx_joinon_where","join.nested_loop_join_idx_usenl","join.nested_loop_join_idx_usenl_joinon","join.nested_loop_join_idx_usenl_joinon_where","join.nested_loop_join_joinon","join.nested_loop_join_joinon_where","nop","nop_index","nop_index_default","nop_index_multi_rowkey","nop_multi_rowkey","update_delete_orderby_limit","update_delete_orderby_limit_idx","update_delete_orderby_limit_unique","update_hot","type_date.update_timestamp_affect_rows","bug_5050383","plan_expression_slave",
'recyclebin.recyclebin_sync_ddl', 'recyclebin.recyclebin_information_schema','complex_obgene_sql_1',
'partition_part_id',
'insert.insert_ignore_multi_stmt', 'insert.insert_ignore_one_row', 'insert.insert_ignore_one_stmt_multi_row', 'insert.insert_select_ignore', 'outline.outline_use',
'part_mg.alter_tablegroup_timeout1'
] + excludes + ps_test
cp_list=['ps_affect_rows',
'affect_rows',
'master_ups_lost_causedby_switch_twice',
'deadlock_causedby_ups_switch',
'jdbc_ps_complex',
'jdbc_trx_with_merge',
'jdbc_parallel_trx',
'jdbc_ps_insert',
'jdbc_ps_all_statement',
'hex_ip_java',
'special_hook',
'type_date.timestamp_2m',
'set',
'kill',
'killquery',
'decode_ps_return_type',
'decode_return_type',
'jdbc_replace_ps',
'jdbc_replace_ps_trx',
'jdbc_replace_parallel_trx',
'insert_fail',
'sql_audit',
'decode_ps_return_type',
'decode_return_type',
'ps_lose_delete','ps_lose_delete_tr','ps_lose_insert','ps_lose_insert_tr','ps_lose_muticonn','ps_lose_replace','ps_lose_replace_tr','ps_lose_select','ps_lose_select_tr','ps_lose_update','ps_lose_update_tr','ps_muticonn_stress','ps_lose_when','ps_lose_when2',
'ps_lose_delete','ps_lose_delete_tr','ps_lose_insert','ps_lose_insert_tr','ps_lose_muticonn','ps_lose_replace','ps_lose_replace_tr','ps_lose_select','ps_lose_select_tr','ps_lose_update','ps_lose_update_tr','ps_muticonn_stress','ps_lose_when','ps_lose_when2',
'bind_variable_stress_65535','bind_variable_stress_greater_then_65535','vector_nps',
'join_bigid_bug206703','update_rowkey_basic','update_rowkey_bug5003370',
'zcreate10000table',
'zcreateindex1000',
'decode_extra_jdbc','func_sign','zaddlmajor_gt64times_expire_trx','zaddlmajor_gt64times', "plan_expression_slave",'optimizer.optimizer_bug_misc','sql_alloc_count','tenant.monitor', 'query_rowkey_range', 'query_eliminate_sort', 'query_with_precast', 'recyclebin.recyclebin_sync_ddl', 'recyclebin.recyclebin_information_schema',
] + excludes
j_list=['ps_1',
'ps_3',
'select_error',
'autocommit',
'tc_multicolumn_different',
'hex_ip',
'empty_input',
'master_ups_lost_causedby_switch_twice',
'deadlock_causedby_ups_switch',
'delete_bug206717',
'delete_bug206717_yzf',
'deprecated_features',
'information_schema.information_schema2',
'information_schema.information_schema_db',
'update.update_ignore_multi_stmt',
'insert.insert_ignore_multi_stmt',
'synchronization',
'set',
'create_frommysql',
'ix_drop_error',
'ix_drop',
'number.ix_index_decimals',
'ix_index_non_string',
'ix_index_string_length',
'ix_index_string',
'number.ix_unique_decimals',
'ix_unique_non_string',
'ix_unique_string_length',
'ix_unique_string',
'ix_using_order',
'serialize_6k_bug',
'kill',
'killquery',
'trx_timeout',
'create_user',
'sql_audit',
'create2',
'scan_2M_size',
'user_privilege',
'user_pwd',
'revoke',
'session_timeout',
'privileges',
'jdbc_ps_all_statement',
'query_method',
'query_timeout',
'sfu',
'sfu2',
'when_parallel1',
'when_parallel3',
'update_delete_many_data',
'when_parallel2',
'when_trx2',
'join_bigid_bug206703',
'a_trade_schema',
'create10000table',
'nop',
'nop_multi_rowkey',
'many_number_pk',
'many_number_pk_decimal',
'many_number_pk_large_than_58',
'many_number_pk_timestamp',
'many_number_pk_varchar',
'nop_index',
'nop_index_multi_rowkey',
'nop_index_default',
'java',
'expire_trx',
'insert_fail',
'ps_muticonn_stress',
'decode_ps',
'expire_trx2',
'trans_monitor',
'concurrent_tablet_insert_delete',
'type_date.timestamp_2m',
'decode_extra',
'zaddlmajor_gt64times',
'zaddlmajor_gt64times_expire_trx',
'sql_audit_c_ps',
'transformer.impl',
'information_schema.information_schema2',
'information_schema.information_schema_db',
'information_schema.information_schema_chmod',
'information_schema.information_schema_inno',
'plan_cache.plan_cache_update',
'zcreate10000table',
'zcreateindex1000',
'create_not_windows',
'partition.ob_partition_max_num',
'partition.ob_partition_max_num_pk',
'partition.partition_auto_increment_innodb',
'view.innodb_func_view',
'view.innodb_views',
'information_schema.information_schema2_nt',
'information_schema.information_schema_db_nt',
'group_min_max',
'show_check',
'lowercase_table4',
'parser_precedence_1',
'parser',
'mysql_comments',
'dml_enable_info',
'replace_and_insert_on_dup.replace_into_affected_rows',
'replace_and_insert_on_dup.replace_with_auto_increment',
'replace_and_insert_on_dup.replace_with_different_data_type',
'replace_and_insert_on_dup.insert_on_duplicate_key_muti_unique_index',
'replace_and_insert_on_dup.insert_on_duplicate_key_only_primary_key',
'replace_and_insert_on_dup.replace_read_laster',
'meta_info.meta_const',
"plan_expression_slave",
'insert_rows_sum_of_2M_size','optimizer.optimizer_bug_misc','sql_alloc_count',
'outline.outline_basic',
'outline.outline_concurrent',
'query_rowkey_range', 'query_eliminate_sort', 'query_with_precast',
'outline.create_charge_outline',
'meta_cast', 'recyclebin.recyclebin_sync_ddl', 'recyclebin.recyclebin_information_schema',
] + excludes
jp_list=['meta_info.meta_build_in_func_test','meta_info.meta_func_ceil','meta_info.meta_func_floor',
'meta_info.meta_func_gconcat','meta_info.meta_func_group_1','meta_info.meta_func_length',
'meta_info.meta_test_func_return_type','meta_info.meta_timefuncnull','meta_info.meta_const',
'meta_info.meta_type','meta_info.meta_func','meta_cast',
'dml_enable_info','insert.insert2','executor_scan_2_mode',
'dml_enable_info','insert.insert2','executor_scan_2_mode','expr_precision_scale_length',
'update.update_ignore_multi_stmt',
'insert.insert_ignore_multi_stmt',
'replace_and_insert_on_dup.replace_into_affected_rows',
'replace_and_insert_on_dup.replace_with_auto_increment',
'replace_and_insert_on_dup.replace_with_different_data_type',
'replace_and_insert_on_dup.insert_on_duplicate_key_muti_unique_index',
'replace_and_insert_on_dup.insert_on_duplicate_key_only_primary_key',
'replace_and_insert_on_dup.replace_read_laster',
'ps_1',
'create_not_windows',
'crash_manytables_string',
'tenant.resource_pool_new',
'partition.ob_partition_max_num',
'partition.ob_partition_max_num_pk',
'partition.partition_auto_increment_innodb',
'view.innodb_func_view',
'view.innodb_views',
'information_schema.information_schema2_nt',
'information_schema.information_schema_db_nt',
'ps_3',
'select_error',
'crash_manytables_number',
'alter.ta_drop_string_index',
'autocommit',
'crash_manycolumns_number',
'crash_manycolumns_string',
'empty_input',
'tc_multicolumn_different',
'master_ups_lost_causedby_switch_twice',
'deadlock_causedby_ups_switch',
'delete_bug206717',
'delete_bug206717_yzf',
'many_columns',
'serialize_6k_bug',
'kill',
'create_frommysql',
'hex_ip',
'killquery',
'trx_timeout',
'create_user',
'deprecated_features',
'information_schema.information_schema2',
'information_schema.information_schema_db',
'synchronization',
'group_min_max',
'show_check',
'ix_drop_error',
'ix_drop',
'number.ix_index_decimals',
'ix_index_non_string',
'ix_index_string_length',
'ix_index_string',
'number.ix_unique_decimals',
'ix_unique_non_string',
'ix_unique_string_length',
'ix_unique_string',
'ix_using_order',
'create2',
'scan_2M_size',
'user_privilege',
'user_pwd',
'revoke',
'session_timeout',
'many_number_pk',
'many_number_pk_decimal',
'many_number_pk_large_than_58',
'many_number_pk_timestamp',
'many_number_pk_varchar',
'type_date.timestamp_2m',
'decode_ps',
'privileges',
'query_method',
'query_timeout',
'expire_trx',
'sql_audit',
'sql_audit_c_ps',
'sfu',
'sfu2',
'when_parallel1',
'when_parallel3',
'when_parallel2',
'when_trx2',
'vector_nps',
'a_trade_schema',
'parallel_create_table',
'create10000table',
'nop',
'nop_multi_rowkey',
'nop_index',
'nop_index_multi_rowkey',
'nop_index_default',
'java',
'set',
'update_delete_many_data',
'join_bigid_bug206703',
'ps_muticonn_stress',
'insert_fail',
'trans_monitor',
'concurrent_tablet_insert_delete',
'zcreate10000table',
'zcreateindex1000',
'decode_extra',
'expire_trx2',
'expire_trx_nop',
'zaddlmajor_gt64times_expire_trx',
'plan_cache.plan_cache_update',
'zcreate10000table',
'zcreateindex1000',
'innodb.innodb_misc1',
'innodb.innodb_mysql',
'innodb.innodb',
'execution_constants',
'inner_table.inner_table_overall',
'greedy_search',
'plan_base_line_for_schema',
'comment_stmt',
'parser_precedence_1',
'parser',
'mysql_comments',
'lowercase_table4',
'func_in_none',
'derived',
'trx.init_innodb',
'trx.rr_id_3',
'trx.rr_sc_sum_total',
'trx.rr_u_4',
'trx.deadlock',
'consistent_snapshot',
"plan_expression_slave",
"show_create",
'insert_rows_sum_of_2M_size',
'optimizer.optimizer_bug_misc',
'sql_alloc_count',
'outline.outline_basic',
'outline.outline_concurrent',
'union1',
'outline.create_charge_outline', 'recyclebin.recyclebin_sync_ddl', 'recyclebin.recyclebin_information_schema'
] + excludes
o_list=['ps_affect_rows',
'affect_rows',
'master_ups_lost_causedby_switch_twice',
'deadlock_causedby_ups_switch',
'jdbc_ps_complex',
'jdbc_trx_with_merge',
'jdbc_parallel_trx',
'jdbc_ps_insert',
'create_use',
'revoke',
'user_privilege',
'rename_user',
'kill',
'killquery',
'set',
'jdbc_replace_ps',
'jdbc_replace_ps_trx',
'jdbc_replace_parallel_trx',
'create_user',
'trx_timeout',
'session_timeout',
'ps_lose_delete','ps_lose_delete_tr','ps_lose_insert','ps_lose_insert_tr','ps_lose_muticonn','ps_lose_replace','ps_lose_replace_tr','ps_lose_select','ps_lose_select_tr','ps_lose_update','ps_lose_update_tr','ps_muticonn_stress','jdbc_ps_all_statement','ps_lose_when','ps_lose_when2',
'query_method',
'lowercase_table4',
"plan_expression_slave",
'optimizer.optimizer_bug_misc','sql_alloc_count','tenant.monitor',
'query_rowkey_range', 'query_eliminate_sort', 'query_with_precast'
] + excludes
op_list=['ps_affect_rows',
'affect_rows',
'master_ups_lost_causedby_switch_twice',
'deadlock_causedby_ups_switch',
'jdbc_ps_complex',
'jdbc_trx_with_merge',
'jdbc_parallel_trx',
'jdbc_ps_insert',
'build_in_func_test',
'create_user',
'revoke',
'user_privilege',
'rename_user',
'kill',
'killquery',
'set',
'jdbc_replace_parallel_trx',
'create_user',
'jdbc_replace_ps',
'jdbc_replace_ps_trx',
'trx_timeout',
'session_timeout',
'query_method',
'ps_lose_delete','ps_lose_delete_tr','ps_lose_insert','ps_lose_insert_tr','ps_lose_muticonn','ps_lose_replace','ps_lose_replace_tr','ps_lose_select','ps_lose_select_tr','ps_lose_update','ps_lose_update_tr','ps_muticonn_stress','jdbc_ps_all_statement','ps_lose_when','ps_lose_when2','vector_nps',
"plan_expression_slave",
'optimizer.optimizer_bug_misc','sql_alloc_count', 'tenant.monitor',
'query_rowkey_range', 'query_eliminate_sort', 'query_with_precast'
] + excludes
remote_list=['plan_base_line',
'plan_base_line_for_schema',
'zcreate10000table',
'zcreateindex1000',
'kill',
'innodb_icp',
'optimizer.optimizer_bug_misc',
'only_full_group_by_sql_mode',
'sql_alloc_count',
'project_pruning',
'index_orderby_select_unique_idx',
'aggregate_rewrite',
'late_materialization',
'kill_transaction',
'partition.partition_range',
'cbo',
'partition.ob_partition_location',
'select_distinct',
'delete_alias',
'generated_column_basic',
'generated_column',
'time_zone.time_zone_usage',
'join.anti_semi_join',
] + excludes
slave_list=['scan_2M_size','update_delete_many_data','insert_rows_sum_of_2M_size','information_schema',
'join_bigid_bug206703','tenant2', 'tenant3','simple_ddl','join.join_outer_new','show',
'plan_cache.plan_cache_update','information_schema.information_schema_desc',
'information_schema.information_schema_select','group_min_max','aggregate_rewrite','order_by',
'show_check', 'innodb.innodb', 'innodb.innodb_mysql', 'negation_elimination','subselect_innodb',
'limit_update_delete',
'greedy_search','greedy_optimizer','case','write_timeout','dml_update_multi_partition',
'write_timeout', 'innodb.innodb_pk_extension_on',
'information_schema.information_schema2','information_schema.information_schema-big','tenant',
'derived', 'func_in_none', 'group_by1','union1',
'group_by_1', 'eq_range_idx_stat','join.join_blk_nested', 'partition.partition_locking','optimizer.optimizer_bug_misc', 'only_full_group_by_sql_mode',
'inner_table.tenant_virtual_outline', 'tenant.create_many_tiny','inner_table.tenant_virtual_concurrent_limit_sql',
'query_rowkey_range', 'query_eliminate_sort', 'query_with_precast',
'inner_table.all_virtual_sql_plan_monitor', 'test_partition_id', 'bit_type.bit_column_dml', 'recyclebin.recyclebin_sync_ddl', 'recyclebin.recyclebin_information_schema',
'transformer.or_expansion','acs.acs_basic','plan_cache_elimination_for_buffer_table', 'found_rows_show_stmt',
'partition_part_id',
'spm.spm_new_plan_not_ok', 'spm.spm_new_plan_ok', 'so_udf', 'spm.spm_repeated_add_baseline',
'insert.insert_ignore_multi_stmt', 'insert.insert_ignore_one_row', 'insert.insert_ignore_one_stmt_multi_row', 'insert.insert_select_ignore', 'plan_cache.plan_cache_same_name',
'outline.outline_use', "spm.spm_banned_plan", "spm.spm_fixed_baseline",
'part_mg.alter_tablegroup_timeout1', 'replace_multi_partition', 'spm.spm_param_info', 'plan_cache.plan_cache_insert_uncertain_op'
]+remote_list+c_list+excludes
proxy_list=['inner_table.all_virtual_sql_plan_monitor', 'scan_2M_size','join_bigid_bug206703','join.join_outer_new', "plan_expression_slave","eq_range_idx_stat","greedy_search","synchronization",'join.join_blk_nested','greedy_optimizer', 'derived','optimizer.optimizer_bug_misc', 'plan_cache.plan_cache_update', 'plan_cache.plan_cache_memory', 'plan_cache.plan_cache_outline', 'only_full_group_by_sql_mode','sql_alloc_count', 'sys_vars.init_connect_var', 'outline.outline_basic', 'query_rowkey_range', 'query_eliminate_sort', 'query_with_precast', 'sys_vars.init_connect_var'
]+remote_list+c_list+excludes
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
psmall_source = {
"g.default": 60,
"g.buffer": 120,
"inner_table.inner_table_overall": 120,
"bulk_insert": 320,
"global_index.global_index_lookup_1": 90,
"global_index.global_index_lookup_2": 90,
"global_index.global_index_lookup_3": 90,
"global_index.global_index_lookup_4": 90,
"global_index.global_index_lookup_5": 90,
"global_index.global_index_lookup_6": 150,
"a_trade_notify": 280,
"a_trade_quick": 150,
"sfu_norow_alias": 240,
"global_index.global_index_basic": 480,
}
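# Added note: the values above are per-case soft time limits in seconds. The run_test plugin
# looks a case up by name, falls back to "g.default" when the case has no entry, and adds
# "g.buffer" on top to form the overall case timeout.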
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
psmall_test=[
###=== sql engine 3.0 test
'static_engine.table_insert',
'static_engine.hash_set',
'static_engine.merge_set',
'static_engine.hash_distinct',
'static_engine.nested_loop_join',
'static_engine.merge_join',
'static_engine.explicit_cast',
'static_engine.expr_abs',
'static_engine.expr_and_or',
'static_engine.expr_ascii',
'static_engine.expr_bool',
'static_engine.expr_collation',
'static_engine.expr_concat',
'static_engine.expr_conv',
'static_engine.expr_datediff',
'static_engine.expr_date',
'static_engine.expr_des_hex_str',
'static_engine.expr_dump',
'static_engine.expr_elt',
'static_engine.expr_empty_arg',
'static_engine.expr_estimate_ndv',
'static_engine.expr_field',
'static_engine.expr_find_in_set',
'static_engine.expr_from_unixtime',
'static_engine.expr_get_sys_var',
'static_engine.expr_insert',
'static_engine.expr_is_serving_tenant',
'static_engine.expr_is',
'static_engine.expr_left',
'static_engine.expr_length',
'static_engine.expr_lnnvl',
'static_engine.expr_location',
'static_engine.expr_lower_upper',
'static_engine.expr_math',
'static_engine.expr_md5',
'static_engine.expr_mid',
'static_engine.expr_neg',
'static_engine.expr_not',
'static_engine.expr_nullif_ifnull',
'static_engine.expr_nvl',
'static_engine.expr_pad',
'static_engine.expr_part_hash',
'static_engine.expr_part_key',
'static_engine.expr_regexp_func',
'static_engine.expr_regexp',
'static_engine.expr_repeat',
'static_engine.expr_replace',
'static_engine.expr_space',
'static_engine.expr_str',
'static_engine.expr_substring_index',
'static_engine.expr_substr',
'static_engine.expr_sys_privilege_check',
'static_engine.expr_time_diff',
'static_engine.expr_timestampadd',
'static_engine.expr_todays',
'static_engine.expr_trim',
'static_engine.expr_trunc',
'static_engine.expr_unhex',
'static_engine.expr_xor',
'static_engine.hash_distinct',
'static_engine.hash_set',
'static_engine.material',
'static_engine.merge_join',
'static_engine.merge_set',
'static_engine.monitoring_dump',
'static_engine.nested_loop_join',
'static_engine.partition_split',
'static_engine.px_basic',
'static_engine.static_engine_case',
'static_engine.static_engine_cmp_null',
'static_engine.static_engine_hash',
'static_engine.subplan_filter',
'static_engine.table_insert',
'static_engine.table_scan',
'static_engine.subplan_scan',
'static_engine.expr_nextval',
'static_engine.expr_unix_timestamp',
'static_engine.expr_char_length',
'static_engine.expr_assign',
'static_engine.expr_get_user_var',
'static_engine.expr_sign',
###=== sql engine 3.0 test end
#####'partition_split',
'execution_partition_pruning_mysql',
'part_mg.basic_partition_mg1',
'part_mg.basic_partition_mg_pg3',
'bulk_insert',
'sfu_norow_alias',
'global_index.global_index_lookup_1',
'global_index.global_index_lookup_2',
'global_index.global_index_lookup_3',
'global_index.global_index_lookup_4',
'global_index.global_index_lookup_5',
'empty_input',
'selectotherdb',
'special_hook',
'substring_index',
'special_stmt',
'bug210026',
####'default_system_variable',
####'java',
'escape',
'largetimeout',
'count',
'expr.expr_position',
'type_date.test_select_usec_to_time',
'aggr_bug200109',
'driver5114_bug',
'chinese',
'distinct',
'type_date.timestamp2',
'ms_lose_rollback',
'column_alias',
'h',
'number.bug229955_positive_int',
'bug233498_rename_table_jianming',
'bug200747',
'alias3',
'expr.expr_ceil',
'expr.func_length',
'replace_null',
'func_group_5',
'explain',
'bool',
'outer_join_where_is_null',
'groupby.group_by_4',
'datatype_java',
'bug5910265_update_orderby_limit_dilang',
'join_basic',
'type_varchar_2',
'sq_from_2',
'compare',
'func_group_6',
'join.nested_loop_join_right_null_joinon_where',
'type_date.add_timestamp_column',
'trx_4',
'safe_null_test',
'null2',
'join.nested_loop_join_right_null_joinon',
'limit',
'alias2',
'join_many_table_single_field',
'non_reserved_keyword',
'update_column_use_other',
'empty_table',
'expr.expr_nseq',
'expr.expr_floor',
'groupby.group_by_2',
'trx_5',
'join_many_table',
'type_date.expr_date_add_sub',
'expr.func_equal',
'truncate_table',
####'select_bug',
'duplicate_key',
'type_date.timefuncnull',
'get',
'join.nested_loop_join_right_null',
'trans_ac',
'delete.deleteV2',
'rowkey_is_null',
####'subquery',
'rowkey_is_int',
####'tsc',
'sq_from',
'rename_table2',
'join_star',
####'create_view',
'pk_num_boundary',
'type_date.daylight_saving_time',
'intersect',
'join_null',
'identifier_name_length',
'delete.delete',
'delete.delete_from_mysql',
'type_date.datetime_java',
'expr.collation_expr',
'jp_length_utf8',
'rowkey_update_datatype_convert',
'join_equivalent_transfer',
'update_range',
'ddlrollback',
'bench_count_distinct',
'expr.func_regexp',
'join_using1',
'type_date.updaterowkeymoditime',
'except',
'func_group_1',
'rowkey_is_char',
'type_date.type_create_time',
'delete.delete_range',
'groupby.group_by_basic',
'range',
'select_basic',
'type_date.type_modify_time',
'fin',
'func_group_7',
'expr.expr_instr',
'autocommit',
'connection',
'view',
'bug_different_tranid_intrx',
'createvarchar',
'createsql',
'plan_cache.plan_cache_multi_query',
'plan_cache.neg_sign',
'bug_setEmptyPwd',
'trx_6',
'add',
'minus',
'div',
'expr.mul',
'expr.expr_locate',
'trx_timeout_bug',
'a_trade_quick',
'view_2',
'big_trans_with_mutil_redo',
'idx_unique_many_idx_one_ins',
'two_order_by',
'parallel_insert',
'index_49_alias',
'index_47_NULL',
'index_14_hint',
'idx_const_basic_one_bool',
'trx_timeout',
'update_delete_limit_unique_key',
'a_trade_notify',
'information_schema',
'information_schema.information_schema-big',
'information_schema.information_schema_desc',
'inner_table.all_virtual_sys_parameter_stat',
'inner_table.schemata',
'inner_table.session_status',
'inner_table.all_virtual_upgrade_inspection',
'inner_table.session_variables',
'inner_table.character_sets',
'inner_table.table_constraints',
'inner_table.all_virtual_data_type_class',
'inner_table.collation_character_set_applicability',
'inner_table.table_privileges',
'inner_table.all_virtual_data_type',
'inner_table.collations',
'inner_table.tables',
'inner_table.tenant_virtual_event_name',
'inner_table.global_status',
'inner_table.tenant_virtual_partition_stat',
'inner_table.all_virtual_engine',
'inner_table.global_variables',
'inner_table.tenant_virtual_statname',
'inner_table.all_virtual_interm_result',
'inner_table.user_privileges',
'inner_table.partitions',
'inner_table.views',
'inner_table.all_virtual_tenant_partition_meta_table',
'inner_table.all_virtual_pg_partition_info',
'inner_table.inner_table_overall',
'create_using_type',
'topk',
'dist_nest_loop_simple',
'executor.basic',
'executor.trx',
'skyline.skyline_basic_mysql',
'skyline.skyline_complicate_mysql',
'skyline.skyline_business_mysql',
'skyline.skyline_index_back_mysql',
'trx.trans_consistency_type',
'trx.ts_source',
'transformer.transformer_add_limit_for_union',
'transformer.transformer_outer_join_simplification',
'transformer.transformer_predicate_deduce',
'transformer.transformer_simplify',
'global_index.global_index_select',
'replace',
'generated_column',
'window_function.farm',
'join.anti_semi_join',
'join.join_merge',
'executor.full_join',
'create_tablegroup_with_tablegroup_id',
'show_create_tablegroup',
'optimizer.bushy_leading_hint',
'optimizer.default_statistic',
'optimizer.equal_set_mysql',
'optimizer.union_sort_opt',
'optimizer.estimate_cost',
'optimizer.optimizer_bug12484726_mysql',
'optimizer.optimizer_bug13058938_mysql',
'optimizer.optimizer_bug15188850_mysql',
'optimizer.optimizer_bug15439492_mysql',
'optimizer.optimizer_bug16207306_mysql',
'optimizer.optimizer_bug17500767_mysql',
'optimizer.optimizer_bug18058771_mysql',
'optimizer.optimizer_bug18135868_mysql',
'optimizer.optimizer_bug18595461_mysql',
'optimizer.optimizer_bug19634818_mysql',
'optimizer.optimizer_bug21444584_mysql',
'subquery.idx_with_const_expr_21_subquery_dilang',
'subquery.optimizer_subquery_bug',
'subquery.order_by_subquery',
'subquery.rqg_ob_subquery_semijoin_nested_sql_parameterization',
'subquery.rqg_subquery_materialization_oop',
'subquery.rqg_subquery_semijoin_mismatch',
'subquery.rqg_subquery_semijoin_nested_timeout',
'subquery.spf_bug13044302',
'subquery.subquery',
'subquery.subquery_sj_firstmatch',
'subquery.subquery_sj_innodb',
'px.add_material',
'px.agg',
'px.alloc_material_for_producer_consumer_schedule_mode',
'px.default_open_px',
'px.join_hash',
'px.join_mj',
'px.join_nlj',
'px.join_pwj',
'px.sql_audit',
'px.tsc',
'px.union',
'px.unmatched_distribution',
'px.dml_use_px',
'px_gi_aff_bug_28565554',
'hierarchical_query.hierarchical_basic_mysql',
'foreign_key.dml_147',
'trx.pg_trans',
'trx.serializable_constrains',
'meta_info.meta_build_in_func_test',
'meta_info.meta_func',
'meta_info.meta_func_ceil',
'meta_info.meta_func_floor',
'meta_info.meta_test_func_return_type',
'meta_info.meta_const',
'alter.alter_log_archive_option',
'duplicate_table.test_duplicate_table',
'sql_throttle',
]
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
reboot_cases=['zz_alter_sys',
'create2',
'dump',
'calc_phy_plan_size',
'charset_and_collation',
'kill',
'killquery',
'read_config',
'select_frozen_version',
'create_index',
'create_syntax',
'bigvarchar_trans',
'bigvarchar_gmt',
'bigvarchar_1.25M_idx',
'expire_bug5328455',
'expire_trx',
'expire_index_trxdel',
'binary_protocol',
'error_msg',
'index_basic',
'index_quick',
'index_01_create_cols',
'teststricttime',
'step_merge_num',
'index_03_create_type',
'index_32_trx_rowkey_range_in',
'many_number_pk_large_than_58',
'virtual_table',
'merge_delete2',
'nested_loop_join_cache',
'nested_loop_join_cache_joinon',
'alter_table',
'expire_trx_modifydata',
'jdbc_ps_all_statement',
'expire_trx_nop',
'show',
'ps_lose_update_tr',
'join',
'table_consistent_mode',
'table_only_have_rowkey',
'resource_pool',
'bigvarchar_pri',
'bigvarchar_1.25M_time',
'testlimit_index',
'trx_expire_step_merge_num',
'trx_expire_alter_drop_add_col',
'trx_expire_idx_unique_merge_step',
'update_delete_many_data',
'bigvarchar_prejoin',
'zaddlmajor_gt64times',
'zcreate10000table',
'zcreateindex1000',
'sql_audit',
'trx_expire_more_oper',
'expire_trx_replace2',
'expire_trx2',
'zhuweng_thinking',
'lower_case_0',
'lower_case_1',
'lower_case_2',
'create_tenant_sys_var_option',
'show_tables',
'information_schema',
'index_11_dml_after_major_freeze',
'update_delete_limit_merge_idx_part',
'idx_with_const_expr26to30',
'idx_unique_many_idx_one_ins',
'inner_table.inner_table_overall',
'inner_table.all_virtual_partition_sstable_image_info',
'inner_table.all_virtual_sql_plan_statistics',
'inner_table.all_virtual_tenant_memstore_allocator_info',
'information_schema.information_schema_part',
'information_schema',
'information_schema.information_schema_select',
'information_schema.select_in_sys_and_normal_tenant',
'information_schema.information_schema_select_one_table',
'parallel_create_table',
'inner_table.all_partition_sstable_merge_info',
'tenant.resource_pool_new',
'charset.jp_create_db_utf8',
'information_schema.information_schema_desc',
'plan_base_line_for_schema',
'partition.partition_innodb',
'schema_bugs',
'schema_bugs2',
'schema_bug#8767674',
'schema_bug#8872003',
'spm.outline_no_hint_check_hit',
'spm.outline_concurrent',
'spm.outline_use',
'default_system_variable',
'ddl_on_core_table',
'time_zone.time_zone_variable',
'ddl_on_core_table_supplement',
'schema_change_merge',
'visible_index',
'progressive_merge',
'information_schema.information_schema2',
'rebalance_map',
'alipay_dns_4ob',
'replace.re_string_range_set',
'information_schema_part_oceanbase',
'ddl_on_inner_table',
'part_mg.alter_tablegroup_timeout',
'part_mg.alter_tablegroup_timeout1',
'part_mg.basic',
'part_mg.basic_partition_mg1',
'part_mg.tablegroup_split_with_drop_table',
'zcreate1wpartiton',
'tenant',
'tenant2'
]
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
succ_filter=[
'tenant2',
'tenant3',
'tenant_diversity_replica_num_boundary',
'tenant_space_table_refresh',
'tenant',
'tenant.create_many_tin',
'tenant.resource_not_enough',
'tenant.resource_pool_new',
'zz_alter_sys',
'zcreateindex1000',
'zcreate1wpartiton',
'zcreate10000table',
'zaddlmajor_gt64times',
'materialized_view.collect_mv',
'materialized_view.materialized_view',
'materialized_view.mv_basic',
'materialized_view.mv_basic2',
'materialized_view.mv_basic3',
'materialized_view.mv_storage1',
'materialized_view.mv_storage2',
'materialized_view.mv_storage3',
'materialized_view.mv_storage_bug10101539',
'materialized_view.mv_storage_bug10138064',
'materialized_view.mv_storage_bug10414345',
'materialized_view.mv_storage_bug10575767',
'materialized_view.mv_storage_large',
'materialized_view.bug#10432650',
'materialized_view.join_col_collation',
'materialized_view.table_option',
'materialized_view.non_sys_tenant',
'plan_cache.alter_system_flush',
'plan_cache.plan_cache_retry',
'ocp',
'database.database_create',
'database.database_charset',
'database.database_name',
'ddl.multiple_pool',
'outline.outline_use',
'inner_table.all_virtual_core_inner_table',
'alter_system',
#'executor.join_with_part_ddl',
'ctype_utf8mb4_innodb',
'ctxcat_index_with_auction_title',
'ctxcat_index_basic',
'compress_data',
'plan_base_line_for_schema',
'plan_exchange',
'plan_expression_slave',
'spf_bug13044302',
'window_function_mysql.window_functions',
'merge_into.merge_insert',
'merge_into.merge_into_normal',
'merge_into.merge_subquery',
'merge_into.merge_update',
'pl.sp_mysql',
'kill_transaction',
'update.update_ignore_multi_row',
'update.update_ignore_multi_stmt',
'update.update_ignore_one_row',
]
case 1: commit
connect conn1,$OBMYSQL_MS0,$OBMYSQL_USR,$OBMYSQL_PWD,test,$OBMYSQL_PORT;
connection conn1;
show variables like 'autocommit';
Variable_name Value
autocommit ON
drop table if exists t1;
create table t1 (c1 int primary key, c2 varchar(1024));
set autocommit=0;
insert into t1 values (1, '中国');
select * from t1 where c1 = 1 for update;
c1 c2
1 中国
commit;
set autocommit=1;
select * from t1;
c1 c2
1 中国
disconnect conn1;
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import re
import os
import time
import shlex
from subprocess import Popen, PIPE
from copy import deepcopy
from ssh import LocalClient
from tool import DirectoryUtil
inner_dir = os.path.split(__file__)[0]
inner_test_dir = os.path.join(inner_dir, 't')
inner_result_dir = os.path.join(inner_dir, 'r')
inner_suite_dir = os.path.join(inner_dir, 'test_suite')
class Arguments:
def add(self, k, v=None):
self.args.update({k:v})
def __str__(self):
s = []
for k,v in self.args.items():
if v is not None:
if re.match(r'^--\w', k):
s.append(' %s=%s' % (k, v))
else:
s.append(' %s %s' % (k, v))
else:
s.append(' %s' % k)
return ' '.join(s)
def __init__(self, opt):
self.args = dict()
if 'connector' in opt and 'java' in opt and opt['java']:
self.add('--connector', opt['connector'])
self.add('--host', opt['host'])
self.add('--port', opt['port'])
self.add('--tmpdir', opt['tmp_dir'])
self.add('--logdir', '%s/log' % opt['var_dir'])
DirectoryUtil.mkdir(opt['tmp_dir'])
DirectoryUtil.mkdir('%s/log' % opt['var_dir'])
self.add('--silent')
# our mysqltest doesn't support this option
# self.add('--skip-safemalloc')
self.add('--user', 'root')
if 'user' in opt and opt['user']:
user = opt['user']
if 'connector' not in opt or opt['connector'] == 'ob':
user = user + '@' + opt['case_mode']
self.add('--user', user)
if 'password' in opt and opt['password']:
self.add('--password', opt['password'])
if 'full_user' in opt and opt['full_user']:
self.add('--full_username', opt['full_user'].replace('sys',opt['case_mode']))
if 'tenant' in opt and opt['tenant']:
self.add('--user', 'root@' + opt['tenant'])
self.add('--password', '')
if 'cluster' in opt and opt['cluster']:
self.add('--full_username', 'root@' + opt['tenant'] + '#' + opt['cluster'])
else:
self.add('--full_username', 'root@' + opt['tenant'])
if 'rslist_url' in opt and opt['rslist_url']:
self.add('--rslist_url', opt['rslist_url'])
if 'database' in opt and opt['database']:
self.add('--database', opt['database'])
if 'charsetsdir' in opt and opt['charsetsdir']:
self.add('--character-sets-dir', opt['charsetsdir'])
if 'basedir' in opt and opt['basedir']:
self.add('--basedir', opt['basedir'])
if 'use_px' in opt and opt['use_px']:
self.add('--use-px')
if 'force_explain_as_px' in opt and opt['force_explain_as_px']:
self.add('--force-explain-as-px')
if 'force-explain-as-no-px' in opt:
self.add('--force-explain-as-no-px')
if 'mark_progress' in opt and opt['mark_progress']:
self.add('--mark-progress')
if 'ps_protocol' in opt and opt['ps_protocol']:
self.add('--ps-protocol')
if 'sp_protocol' in opt and opt['sp_protocol']:
self.add('--sp-protocol')
if 'view_protocol' in opt and opt['view_protocol']:
self.add('--view-protocol')
if 'cursor_protocol' in opt and opt['cursor_protocol']:
self.add('--cursor-protocol')
self.add('--timer-file', '%s/log/timer' % opt['var_dir'])
if 'compress' in opt and opt['compress']:
self.add('--compress')
if 'sleep' in opt and opt['sleep']:
self.add('--sleep', '%d' % opt['sleep'])
if 'max_connections' in opt and opt['max_connections']:
self.add('--max-connections', '%d' % opt['max_connections'])
if 'test_file' in opt and opt['test_file']:
self.add('--test-file', opt['test_file'])
self.add('--tail-lines', ('tail_lines' in opt and opt['tail_lines']) or 20)
if 'oblog_diff' in opt and opt['oblog_diff']:
self.add('--oblog_diff')
if 'record' in opt and opt['record'] and 'record_file' in opt and opt['record_file']:
self.add('--record')
self.add('--result-file', opt['record_file'])
else: # diff result & file
self.add('--result-file', opt['result_file'])
def _return(test, cmd, result):
return {'name' : test, 'ret' : result.code, 'output' : result.stdout, 'cmd' : cmd, 'errput': result.stderr}
def run_test(plugin_context, test, env, *args, **kwargs):
cluster_config = plugin_context.cluster_config
stdio = plugin_context.stdio
stdio.start_loading('Running case: %s' % test)
test_ori = test
opt = {}
for key in env:
if key != 'cursor':
opt[key] = env[key]
opt['connector'] = 'ob'
opt['mysql_mode'] = True
mysqltest_bin = opt['mysqltest_bin'] if 'mysqltest_bin' in opt and opt['mysqltest_bin'] else 'mysqltest'
soft = 3600
buffer = 0
if 'source_limit' in opt and opt['source_limit']:
if test_ori in opt['source_limit']:
soft = opt['source_limit'][test_ori]
elif 'g.default' in opt['source_limit']:
soft = opt['source_limit']['g.default']
if 'g.buffer' in opt['source_limit']:
buffer = opt['source_limit']['g.buffer']
case_timeout = soft + buffer
opt['filter'] = 'c'
if 'profile' in args:
opt['profile'] = True
opt['record'] = True
if 'ps' in args:
opt['filter'] = opt['filter'] + 'p'
if 'cluster-mode' in opt and opt['cluster-mode'] in ['slave', 'proxy']:
opt['filter'] = opt['cluster-mode']
# support explain select w/o px hint
# result files for force-explain-xxxx go under
# - explain_r/mysql
# all other result files go under
# - r/mysql
suffix = ''
opt_explain_dir = ''
if 'force-explain-as-px' in opt:
suffix = '.use_px'
opt_explain_dir = 'explain_r/'
elif 'force-explain-as-no-px' in opt:
suffix = '.no_use_px'
opt_explain_dir = 'explain_r/'
opt['case_mode'] = 'mysql'
if 'mode' not in opt:
opt['mode'] = 'both'
if opt['mode'] == 'mysql':
opt['case_mode'] = opt['mode']
if opt['mode'] == 'both':
if test.endswith('_mysql'):
opt['case_mode'] = 'mysql'
get_result_dir = lambda path: os.path.join(path, opt_explain_dir, opt['case_mode'])
opt['result_dir'] = get_result_dir(opt['result_dir'])
if opt['filter'] == 'slave':
opt['slave_cmp'] = 1
result_file = os.path.join(opt['result_dir'], test + suffix + '.slave.result')
if os.path.exists(result_file):
opt['slave_cmp'] = 0
opt['result_file'] = result_file
opt['record_file'] = os.path.join(opt['result_dir'], test + suffix + '.record')
if len(test.split('.')) == 2:
suite_name, test= test.split('.')
opt['result_dir'] = get_result_dir(os.path.join(opt['suite_dir'], suite_name, 'r'))
opt['test_file'] = os.path.join(opt['suite_dir'], suite_name, 't', test + '.test')
if not os.path.isfile(opt['test_file']):
inner_test_file = os.path.join(inner_suite_dir, suite_name, 't', test + '.test')
if os.path.isfile(inner_test_file):
opt['test_file'] = inner_test_file
opt['result_dir'] = get_result_dir(os.path.join(inner_suite_dir, suite_name, 'r'))
else:
opt['test_file'] = os.path.join(opt['test_dir'], test + '.test')
if not os.path.isfile(opt['test_file']):
inner_test_file = os.path.join(inner_test_dir, test + '.test')
if os.path.isfile(inner_test_file):
opt['test_file'] = inner_test_file
opt['result_dir'] = get_result_dir(inner_result_dir)
opt['result_file'] = os.path.join(opt['result_dir'], test + suffix + '.result')
server_engine_cmd = '''obclient -h%s -P%s -uroot -Doceanbase -e "select value from __all_virtual_sys_parameter_stat where name like '_enable_static_typing_engine';"''' % (opt['host'], opt['port'])
result = LocalClient.execute_command(server_engine_cmd, env={}, timeout=3600, stdio=stdio)
if not result:
stdio.error('engine failed, exit code %s. error msg: %s' % (result.code, result.stderr))
env = {
'OBMYSQL_PORT': str(opt['port']),
'OBMYSQL_MS0': str(opt['host']),
'OBMYSQL_PWD': str(opt['password']),
'OBMYSQL_USR': opt['user'],
'PATH': os.getenv('PATH')
}
if 'case_mode' in opt and opt['case_mode']:
env['TENANT'] = opt['case_mode']
if 'user' in opt and opt['user']:
env['OBMYSQL_USR'] = str(opt['user'] + '@' + opt['case_mode'])
else:
env['OBMYSQL_USR'] = 'root'
if 'java' in opt:
opt['connector'] = 'ob'
LocalClient.execute_command('obclient -h %s -P %s -uroot -Doceanbase -e "alter system set _enable_static_typing_engine = True;select sleep(2);"' % (opt['host'], opt['port']), stdio=stdio)
start_time = time.time()
cmd = 'timeout %s %s %s' % (case_timeout, mysqltest_bin, str(Arguments(opt)))
try:
stdio.verbose('local execute: %s ' % cmd, end='')
p = Popen(shlex.split(cmd), env=env, stdout=PIPE, stderr=PIPE)
output, errput = p.communicate()
retcode = p.returncode
if retcode == 124:
output = ''
if 'source_limit' in opt and 'g.buffer' in opt['source_limit']:
errput = "%s secs out of soft limit (%s secs), sql may be hung, please check" % (opt['source_limit']['g.buffer'], case_timeout)
else:
errput = "%s seconds timeout, sql may be hung, please check" % case_timeout
elif isinstance(errput, bytes):
errput = errput.decode(errors='replace')
except Exception as e:
errput = str(e)
output = ''
retcode = 255
verbose_msg = 'exited code %s' % retcode
if retcode:
verbose_msg += ', error output:\n%s' % errput
stdio.verbose(verbose_msg)
cost = time.time() - start_time
LocalClient.execute_command('obclient -h %s -P %s -uroot -Doceanbase -e "alter system set _enable_static_typing_engine = False;select sleep(2);"' % (opt['host'], opt['port']), stdio=stdio)
result = {"name" : test_ori, "ret" : retcode, "output" : output, "cmd" : cmd, "errput" : errput, 'cost': cost}
stdio.stop_loading('fail' if retcode else 'succeed')
return plugin_context.return_true(result=result)
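The Arguments class above is essentially a dict-to-command-line renderer: run_test fills the dict from the test options and then wraps str(Arguments(opt)) in a `timeout` call. A minimal standalone sketch of that rendering step (the option values below are made-up illustrations, and it simplifies the original's `^--\w` regex check to a startswith test):

# mirrors Arguments.__str__: values render as "--opt=value" for long options,
# "key value" otherwise, and None renders the key as a bare flag
def render_args(args):
    parts = []
    for k, v in args.items():
        if v is not None:
            parts.append('%s=%s' % (k, v) if k.startswith('--') else '%s %s' % (k, v))
        else:
            parts.append(k)
    return ' '.join(parts)

if __name__ == '__main__':
    opts = {
        '--host': '127.0.0.1',        # hypothetical values, for illustration only
        '--port': 2881,
        '--user': 'root@mysql',
        '--silent': None,             # flag without a value
        '--test-file': 't/commit.test',
        '--result-file': 'r/mysql/commit.result',
    }
    # run_test would prefix this with "timeout <case_timeout> mysqltest"
    print('mysqltest ' + render_args(opts))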
--disable_query_log
set @@session.explicit_defaults_for_timestamp=off;
--enable_query_log
# owner: jim.wjh
# owner group: SQL3
# description: foobar
--echo case 1: commit
connect (conn1,$OBMYSQL_MS0,$OBMYSQL_USR,$OBMYSQL_PWD,test,$OBMYSQL_PORT);
connection conn1;
show variables like 'autocommit';
--disable_warnings
drop table if exists t1;
--enable_warnings
create table t1 (c1 int primary key, c2 varchar(1024));
set autocommit=0;
insert into t1 values (1, '中国');
select * from t1 where c1 = 1 for update;
commit;
set autocommit=1;
select * from t1;
disconnect conn1;
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
def bootstrap(plugin_context, cursor, *args, **kwargs):
cluster_config = plugin_context.cluster_config
stdio = plugin_context.stdio
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
for key in ['observer_sys_password', 'obproxy_sys_password']:
if key in server_config and server_config[key]:
try:
sql = 'alter proxyconfig set %s = %%s' % key
value = server_config[key]
stdio.verbose('execute sql: %s' % (sql % value))
cursor[server].execute(sql, [value])
except:
stdio.exception('execute sql exception')
stdio.warn('failed to set %s for obproxy(%s)' % (key, server))
plugin_context.return_true()
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import sys
import time
if sys.version_info.major == 2:
import MySQLdb as mysql
else:
import pymysql as mysql
def _connect(ip, port, user):
if sys.version_info.major == 2:
db = mysql.connect(host=ip, user=user, port=int(port))
cursor = db.cursor(cursorclass=mysql.cursors.DictCursor)
else:
db = mysql.connect(host=ip, user=user, port=int(port), cursorclass=mysql.cursors.DictCursor)
cursor = db.cursor()
return db, cursor
def connect(plugin_context, target_server=None, sys_root=True, *args, **kwargs):
count = 10
cluster_config = plugin_context.cluster_config
stdio = plugin_context.stdio
if target_server:
servers = [target_server]
server_config = cluster_config.get_server_conf(target_server)
stdio.start_loading('Connect obproxy(%s:%s)' % (target_server, server_config['listen_port']))
else:
servers = cluster_config.servers
stdio.start_loading('Connect to obproxy')
if sys_root:
user = 'root@proxysys'
else:
user = 'root'
dbs = {}
cursors = {}
while count and servers:
count -= 1
tmp_servers = []
for server in servers:
try:
server_config = cluster_config.get_server_conf(server)
db, cursor = _connect(server.ip, server_config['listen_port'], user)
dbs[server] = db
cursors[server] = cursor
except:
tmp_servers.append(server)
pass
servers = tmp_servers
servers and time.sleep(3)
if servers:
stdio.stop_loading('fail')
return plugin_context.return_false()
else:
stdio.stop_loading('succeed')
if target_server:
return plugin_context.return_true(connect=dbs[target_server], cursor=cursors[target_server])
else:
return plugin_context.return_true(connect=dbs, cursor=cursors)
\ No newline at end of file
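The connect plugin above is a bounded retry loop around pymysql/MySQLdb: every proxy that fails to answer is retried in the next round, with a short sleep in between. A self-contained sketch of the same pattern (the address, credentials and retry counts here are illustrative assumptions, not values taken from any deployment):

import time
import pymysql

def try_connect(ip, port, user='root@proxysys', retries=10, interval=3):
    # bounded retry: obproxy may still be starting up on the first attempts
    for _ in range(retries):
        try:
            db = pymysql.connect(host=ip, user=user, port=port,
                                 cursorclass=pymysql.cursors.DictCursor)
            return db, db.cursor()
        except Exception:
            time.sleep(interval)
    return None, None

# usage sketch (hypothetical address):
# db, cursor = try_connect('127.0.0.1', 2883)
# if cursor:
#     cursor.execute('show proxyconfig like "%port"')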
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
def destroy(plugin_context, *args, **kwargs):
def clean(server, path):
client = clients[server]
ret = client.execute_command('rm -fr %s/* %s/.conf' % (path, path))
if not ret:
# print stderr
global_ret = False
stdio.warn('fail to clean %s:%s' % (server, path))
else:
stdio.verbose('%s:%s cleaned' % (server, path))
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
global_ret = True
stdio.start_loading('obproxy work dir cleaning')
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
stdio.verbose('%s work path cleaning', server)
clean(server, server_config['home_path'])
if global_ret:
stdio.stop_loading('succeed')
plugin_context.return_true()
else:
stdio.stop_loading('fail')
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
def display(plugin_context, cursor, *args, **kwargs):
stdio = plugin_context.stdio
cluster_config = plugin_context.cluster_config
servers = cluster_config.servers
result = []
for server in servers:
data = {
'ip': server.ip,
'status': 'inactive',
'listen_port': '-',
'prometheus_listen_port': '-'
}
try:
cursor[server].execute('show proxyconfig like "%port"')
for item in cursor[server].fetchall():
if item['name'] in data:
data[item['name']] = item['value']
data['status'] = 'active'
except:
stdio.exception('')
pass
result.append(data)
stdio.print_list(result, ['ip', 'port', 'prometheus_port', 'status'],
lambda x: [x['ip'], x['listen_port'], x['prometheus_listen_port'], x['status']], title='obproxy')
plugin_context.return_true()
- src_path: ./home/admin/obproxy-3.1.0/bin/obproxy
target_path: bin/obproxy
type: bin
mode: 755
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
def init(plugin_context, *args, **kwargs):
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
global_ret = True
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
client = clients[server]
home_path = server_config['home_path']
stdio.print('%s initializes cluster work home', server)
if not client.execute_command('mkdir -p %s/run' % (home_path)):
global_ret = False
stdio.print('fail to init %s home path', server)
global_ret and plugin_context.return_true()
- name: home_path
require: true
type: STRING
need_restart: true
description_en: the directory for the work data file
description_local: ObProxy工作目录
- name: listen_port
require: true
type: INT
default: 2883
min_value: 1025
max_value: 65535
need_restart: true
description_en: port number for mysql connection
description_local: SQL服务协议端口号
- name: prometheus_listen_port
require: true
type: INT
default: 2884
min_value: 1025
max_value: 65535
need_restart: true
description_en: obproxy prometheus listen port
description_local: Prometheus监听端口号
- name: appname
require: false
type: STRING
need_restart: true
description_en: application name
description_local: 应用名
- name: cluster_name
require: false
type: STRING
need_restart: true
description_en: observer cluster name
description_local: 代理的observer集群名
- name: rs_list
type: ARRAY
need_restart: true
description_en: root server list(format ip:sql_port)
description_local: observer列表(格式 ip:sql_port)
- name: refresh_json_config
type: BOOL
default: false
min_value: false
max_value: true
need_restart: false
description_en: force update json info if refresh_json_config is true
- name: refresh_rslist
type: BOOL
default: false
min_value: false
max_value: true
need_restart: false
description_en: when refresh config server, update all rslist if refresh_rslist is true
- name: refresh_idc_list
type: BOOL
default: false
min_value: false
max_value: true
need_restart: false
description_en: when refresh config server, update all idc list if refresh_idc_list is true
- name: refresh_config
type: BOOL
default: false
min_value: false
max_value: true
need_restart: false
description_en: when table processor do check work, update all proxy config if refresh_config is true
- name: proxy_info_check_interval
type: TIME
default: 60s
min_value: 1s
max_value: 1h
need_restart: false
description_en: proxy info check task interval, [1s, 1h]
- name: cache_cleaner_clean_interval
type: TIME
default: 20s
min_value: 1s
max_value: 1d
need_restart: false
description_en: the interval for cache cleaner to clean cache, [1s, 1d]
- name: server_state_refresh_interval
type: TIME
default: 20s
min_value: 10ms
max_value: 1h
need_restart: false
description_en: the interval to refresh server state for getting zone or server newest state, [10ms, 1h]
- name: metadb_server_state_refresh_interval
type: TIME
default: 60s
min_value: 10ms
max_value: 1h
need_restart: false
description_en: the interval to refresh metadb server state for getting zone or server newest state, [10ms, 1h]
- name: config_server_refresh_interval
type: TIME
default: 60s
min_value: 10s
max_value: 1d
need_restart: false
description_en: config server info refresh task interval, [10s, 1d]
- name: idc_list_refresh_interval
type: TIME
default: 2h
min_value: 10s
max_value: 1d
need_restart: false
description_en: the interval to refresh idc list for getting newest region-idc, [10s, 1d]
- name: stat_table_sync_interval
type: TIME
default: 60s
min_value: 0s
max_value: 1d
need_restart: false
description_en: update sync statistic to ob_all_proxy_stat table interval, [0s, 1d], 0 means disable, if set a negative value, proxy treat it as 0
- name: stat_dump_interval
type: TIME
default: 6000s
min_value: 0s
max_value: 1d
need_restart: false
description_en: dump statistic in log interval, [0s, 1d], 0 means disable, if set a negative value, proxy treat it as 0
- name: partition_location_expire_relative_time
type: INT
default: 0
min_value: -36000000
max_value: 36000000
need_restart: false
description_en: the unit is ms, 0 means do not expire, others will expire partition location base on relative time
- name: cluster_count_high_water_mark
type: INT
default: 256
min_value: 2
max_value: 102400
need_restart: false
description_en: if cluster count is greater than this water mark, cluster will be kicked out by LRU
- name: cluster_expire_time
type: TIME
default: 1d
min_value: 0
max_value:
need_restart: false
description_en: cluster resource expire time, 0 means never expire, a cluster will be deleted if it has not been accessed for more than this time, [0, ]
- name: fetch_proxy_bin_random_time
type: TIME
default: 300s
min_value: 1s
max_value: 1h
need_restart: false
description_en: max random waiting time of fetching proxy bin in hot upgrade, [1s, 1h]
- name: fetch_proxy_bin_timeout
type: TIME
default: 120s
min_value: 1s
max_value: 1200s
need_restart: false
description_en: default hot upgrade fetch binary timeout, proxy will stop fetching after such long time, [1s, 1200s]
- name: hot_upgrade_failure_retries
type: INT
default: 5
min_value: 1
max_value: 20
need_restart: false
description_en: default hot upgrade failure retries, proxy will stop handle hot_upgrade command after such retries, [1, 20]
- name: hot_upgrade_rollback_timeout
type: TIME
default: 24h
min_value: 1s
max_value: 30d
need_restart: false
description_en: default hot upgrade rollback timeout, proxy will do rollback if receive no rollback command in such long time, [1s, 30d]
- name: hot_upgrade_graceful_exit_timeout
type: TIME
default: 120s
min_value: 0s
max_value: 30d
need_restart: false
description_en: graceful exit timeout, [0s, 30d], if set a value <= 0, proxy treat it as 0
- name: delay_exit_time
type: TIME
default: 100ms
min_value: 100ms
max_value: 500ms
need_restart: false
description_en: delay exit time, [100ms,500ms]
- name: log_file_percentage
type: INT
default: 80
min_value: 0
max_value: 100
need_restart: false
description_en: max percentage of avail size occupied by proxy log file, [0, 90], 0 means ignore such limit
- name: log_cleanup_interval
type: TIME
default: 10m
min_value: 5s
max_value: 30d
need_restart: false
description_en: log file clean up task schedule interval, set 1 day or longer, [5s, 30d]
- name: log_dir_size_threshold
type: CAPACITY
default: 64GB
min_value: 256M
max_value: 1T
need_restart: false
description_en: max usable space size of log dir, used to decide whether should clean up log file, [256MB, 1T]
- name: need_convert_vip_to_tname
type: BOOL
default: false
min_value: false
max_value: true
need_restart: false
description_en: convert vip to tenant name, which is useful in cloud
- name: long_async_task_timeout
type: TIME
default: 60s
min_value: 1s
max_value: 1h
need_restart: false
description_en: long async task timeout, [1s, 1h]
- name: short_async_task_timeout
type: TIME
default: 5s
min_value: 1s
max_value: 1h
need_restart: false
description_en: short async task timeout, [1s, 1h]
- name: username_separator
type: STRING_LIST
default: :;-;.
min_value:
max_value:
need_restart: false
description_en: username separator
- name: enable_client_connection_lru_disconnect
type: BOOL
default: false
min_value: false
max_value: true
need_restart: false
description_en: if client connections reach throttle, true is that new connection will be accepted, and eliminate lru client connection, false is that new connection will disconnect, and err packet will be returned
- name: client_max_connections
type: INT
default: 8192
min_value: 0
max_value: 65535
need_restart: false
description_en: client max connections for one obproxy, [0, 65535]
- name: observer_query_timeout_delta
type: TIME
default: 20s
min_value: 1s
max_value: 30s
need_restart: false
description_en: the delta value for @@ob_query_timeout, to cover net round trip time(proxy<->server) and task schedule time(server), [1s, 30s]
- name: enable_cluster_checkout
type: BOOL
default: true
min_value: false
max_value: true
need_restart: false
description_en: if enable cluster checkout, proxy will send cluster name when login and server will check it
- name: enable_proxy_scramble
type: BOOL
default: false
min_value: false
max_value: true
need_restart: false
description_en: if enable proxy scramble, proxy will send client its variable scramble num, not support old observer
- name: enable_client_ip_checkout
type: BOOL
default: true
min_value: false
max_value: true
need_restart: false
description_en: if enabled, proxy send client ip when login
- name: connect_observer_max_retries
type: INT
default: 3
min_value: 2
max_value: 5
need_restart: false
description_en: max retries to do connect
- name: frequent_accept
type: BOOL
default: true
min_value: false
max_value: true
need_restart: true
description_en: frequent accept
- name: net_accept_threads
type: INT
default: 2
min_value: 0
max_value: 8
need_restart: true
description_en: net accept threads num, [0, 8]
- name: stack_size
type: CAPACITY
default: 1MB
min_value: 1MB
max_value: 10MB
need_restart: true
description_en: stack size of one thread, [1MB, 10MB]
- name: work_thread_num
type: INT
default: 128
min_value: 1
max_value: 128
need_restart: true
description_en: proxy work thread num or max work thread num when automatic match, [1, 128]
- name: task_thread_num
type: INT
default: 2
min_value: 1
max_value: 4
need_restart: true
description_en: proxy task thread num, [1, 4]
- name: block_thread_num
type: INT
default: 1
min_value: 1
max_value: 4
need_restart: true
description_en: proxy block thread num, [1, 4]
- name: grpc_thread_num
type: INT
default: 8
min_value: 8
max_value: 16
need_restart: true
description_en: proxy grpc thread num, [8, 16]
- name: grpc_client_num
type: INT
default: 9
min_value: 9
max_value: 16
need_restart: true
description_en: proxy grpc client num, [9, 16]
- name: automatic_match_work_thread
type: BOOL
default: true
min_value: false
max_value: true
need_restart: true
description_en: ignore work_thread_num configuration item, use the count of cpu for current proxy work thread num
- name: enable_strict_kernel_release
require: true
type: BOOL
default: false
min_value: false
max_value: true
need_restart: true
description_en: if true, proxy only supports RedHat 5u/6u/7u kernel releases; otherwise the kernel release is not checked and proxy may be unstable
- name: enable_cpu_topology
type: BOOL
default: true
min_value: false
max_value: true
need_restart: true
description_en: enable cpu topology, work threads bind to cpu
- name: local_bound_ip
type: STRING
default: 0.0.0.0
max_value: ''
min_value: ''
need_restart: true
description_en: local bound ip(any)
- name: obproxy_config_server_url
type: STRING
default: ''
max_value: ''
min_value: ''
need_restart: true
description_en: url of config info(rs list and so on)
- name: proxy_service_mode
type: STRING
default: ''
max_value: ''
min_value: ''
need_restart: true
description_en: "proxy deploy and service mode: 1.client(default); 2.server"
- name: proxy_id
type: INT
default: 0
max_value: 255
min_value: 0
need_restart: true
description_en: used to identify each obproxy, it can not be zero if proxy_service_mode is server
- name: app_name
type: STRING
default: undefined
max_value: ''
min_value: ''
need_restart: true
description_en: current application name which proxy works for, need defined, only modified when restart
- name: enable_metadb_used
type: BOOL
default: true
max_value: true
min_value: false
need_restart: true
description_en: use MetaDataBase when proxy run
- name: rootservice_cluster_name
type: STRING
default: undefined
max_value: ''
min_value: ''
need_restart: true
description_en: default cluster name for rootservice_list
- name: prometheus_cost_ms_unit
type: BOOL
default: true
max_value: true
min_value: false
need_restart: true
description_en: update sync metrics to prometheus exposer interval, [1s, 1h], 0 means disable, if set a negative value, proxy treat it as 0
- name: bt_retry_times
type: INT
default: 3
min_value: 0
max_value: 100
need_restart: true
description_en: beyond trust sdk retry times
- name: obproxy_sys_password
type: STRING
default: ''
max_value: ''
min_value: ''
need_restart: false
description_en: password of obproxy sys user
- name: observer_sys_password
type: STRING
default: ''
max_value: ''
min_value: ''
need_restart: false
description_en: password of observer proxyro user
\ No newline at end of file
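Each entry above pairs a proxy config name with its type and an optional min_value/max_value range; the reload plugin below applies changed values with `alter proxyconfig set <name> = <value>`. A hedged sketch of range-checking an INT entry against this list (it assumes the list is saved as a YAML file with the hypothetical name parameter.yaml and that PyYAML is installed):

import yaml

def check_int_param(params, name, value):
    # returns True only if the named INT parameter exists and value is in range
    for p in params:
        if p.get('name') == name and p.get('type') == 'INT':
            lo, hi = p.get('min_value'), p.get('max_value')
            if lo is not None and value < lo:
                return False
            if hi is not None and value > hi:
                return False
            return True
    return False

with open('parameter.yaml') as f:      # hypothetical file name
    params = yaml.safe_load(f)
print(check_int_param(params, 'client_max_connections', 8192))  # True, within [0, 65535]
print(check_int_param(params, 'net_accept_threads', 64))        # False, max_value is 8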
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
def reload(plugin_context, cursor, new_cluster_config, *args, **kwargs):
stdio = plugin_context.stdio
cluster_config = plugin_context.cluster_config
servers = cluster_config.servers
cluster_server = {}
change_conf = {}
global_change_conf = {}
global_ret = True
for server in servers:
change_conf[server] = {}
stdio.verbose('get %s old configuration' % (server))
config = cluster_config.get_server_conf_with_default(server)
stdio.verbose('get %s new configuration' % (server))
new_config = new_cluster_config.get_server_conf_with_default(server)
stdio.verbose('get %s cluster address' % (server))
cluster_server[server] = '%s:%s' % (server.ip, config['listen_port'])
stdio.verbose('compare configuration of %s' % (server))
for key in new_config:
if key not in config or config[key] != new_config[key]:
change_conf[server][key] = new_config[key]
if key not in global_change_conf:
global_change_conf[key] = 1
else:
global_change_conf[key] += 1
servers_num = len(servers)
stdio.verbose('apply new configuration')
success_conf = {}
sql = ''
for key in global_change_conf:
success_conf[key] = []
for server in servers:
if key not in change_conf[server]:
continue
try:
sql = 'alter proxyconfig set %s = %%s' % key
value = change_conf[server][key]
stdio.verbose('execute sql: %s' % (sql % value))
cursor[server].execute(sql, [value])
success_conf[key].append(server)
except:
global_ret = False
stdio.exception('execute sql exception: %s' % sql)
for key in success_conf:
if global_change_conf[key] == servers_num == len(success_conf[key]):
cluster_config.update_global_conf(key, change_conf[success_conf[key][0]][key], False)
for server in success_conf[key]:
value = change_conf[server][key]
cluster_config.update_server_conf(server,key, value, False)
return plugin_context.return_true() if global_ret else None
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import time
stdio = None
def get_port_socket_inode(client, port):
port = hex(port)[2:].zfill(4).upper()
cmd = "cat /proc/net/{tcp,udp} | awk -F' ' '{print $2,$10}' | grep '00000000:%s' | awk -F' ' '{print $2}' | uniq" % port
res = client.execute_command(cmd)
if not res or not res.stdout.strip():
return False
stdio.verbose(res.stdout)
return res.stdout.strip().split('\n')
def confirm_port(client, pid, port):
socket_inodes = get_port_socket_inode(client, port)
if not socket_inodes:
return False
ret = client.execute_command("ls -l /proc/%s/fd/ |grep -E 'socket:\[(%s)\]'" % (pid, '|'.join(socket_inodes)))
if ret and ret.stdout.strip():
return True
return False
def confirm_command(client, pid, command):
command = command.replace(' ', '').strip()
if client.execute_command('cmd=`cat /proc/%s/cmdline`; if [ "$cmd" != "%s" ]; then exit 1; fi' % (pid, command)):
return True
return False
def confirm_home_path(client, pid, home_path):
if client.execute_command('path=`ls -l /proc/%s | grep cwd | awk -F\'-> \' \'{print $2}\'`; if [ "$path" != "%s" ]; then exit 1; fi' %
(pid, home_path)):
return True
return False
def is_started(client, remote_bin_path, port, home_path, command):
username = client.config.username
ret = client.execute_command('pgrep -u %s -f "^%s"' % (username, remote_bin_path))
if not ret:
return False
pids = ret.stdout.strip()
if not pids:
return False
pids = pids.split('\n')
for pid in pids:
if confirm_port(client, pid, port):
break
else:
return False
return confirm_home_path(client, pid, home_path) and confirm_command(client, pid, command)
def start(plugin_context, home_path, repository_dir, *args, **kwargs):
global stdio
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
clusters_cmd = {}
real_cmd = {}
pid_path = {}
remote_bin_path = {}
need_bootstrap = True
bin_path = os.path.join(repository_dir, 'bin/obproxy')
error = False
for server in cluster_config.servers:
client = clients[server]
server_config = cluster_config.get_server_conf(server)
if 'rs_list' not in server_config and 'obproxy_config_server_url' not in server_config:
error = True
stdio.error('%s need config "rs_list" or "obproxy_config_server_url"' % server)
if error:
return plugin_context.return_false()
stdio.start_loading('Start obproxy')
for server in cluster_config.servers:
client = clients[server]
remote_home_path = client.execute_command('echo $HOME/.obd').stdout.strip()
remote_bin_path[server] = bin_path.replace(home_path, remote_home_path)
server_config = cluster_config.get_server_conf(server)
pid_path[server] = "%s/run/obproxy-%s-%s.pid" % (server_config['home_path'], server.ip, server_config["listen_port"])
not_opt_str = [
'listen_port',
'prometheus_listen_port',
'rs_list',
'cluster_name'
]
get_value = lambda key: "'%s'" % server_config[key] if isinstance(server_config[key], str) else server_config[key]
opt_str = []
for key in server_config:
if key != 'home_path' and key not in not_opt_str:
value = get_value(key)
opt_str.append('%s=%s' % (key, value))
cmd = ['-o %s' % ','.join(opt_str)]
for key in not_opt_str:
if key in server_config:
value = get_value(key)
cmd.append('--%s %s' % (key, value))
real_cmd[server] = '%s %s' % (remote_bin_path[server], ' '.join(cmd))
clusters_cmd[server] = 'cd %s; %s' % (server_config['home_path'], real_cmd[server])
for server in clusters_cmd:
client = clients[server]
server_config = cluster_config.get_server_conf(server)
port = int(server_config["listen_port"])
prometheus_port = int(server_config["prometheus_listen_port"])
stdio.verbose('%s port check' % server)
remote_pid = client.execute_command("cat %s" % pid_path[server]).stdout.strip()
cmd = real_cmd[server].replace('\'', '')
if remote_pid:
ret = client.execute_command('cat /proc/%s/cmdline' % remote_pid)
if ret:
if ret.stdout.strip() == cmd:
continue
stdio.stop_loading('fail')
stdio.error('%s:%s port is already used' % (server.ip, port))
return plugin_context.return_false()
stdio.verbose('starting %s obproxy', server)
ret = client.execute_command(clusters_cmd[server])
if not ret:
stdio.stop_loading('fail')
stdio.error('failed to start %s obproxy: %s' % (server, ret.stderr))
return plugin_context.return_false()
client.execute_command('''ps -aux | grep '%s' | grep -v grep | awk '{print $2}' > %s''' % (cmd, pid_path[server]))
stdio.stop_loading('succeed')
stdio.start_loading('obproxy program health check')
time.sleep(3)
failed = []
fail_time = 0
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
client = clients[server]
stdio.verbose('%s program health check' % server)
remote_pid = client.execute_command("cat %s" % pid_path[server]).stdout.strip()
if remote_pid:
for pid in remote_pid.split('\n'):
confirm = confirm_port(client, pid, int(server_config["listen_port"]))
if confirm:
stdio.verbose('%s obproxy[pid: %s] started', server, pid)
client.execute_command('echo %s > %s' % (pid, pid_path[server]))
break
else:
fail_time += 1
if fail_time == len(remote_pid.split('\n')):
failed.append('failed to start %s obproxy' % server)
else:
stdio.verbose('No such file: %s' % pid_path[server])
failed.append('failed to start %s obproxy' % server)
if failed:
stdio.stop_loading('fail')
for msg in failed:
stdio.warn(msg)
plugin_context.return_false()
else:
stdio.stop_loading('succeed')
plugin_context.return_true(need_bootstrap=True)
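start() decides whether an existing obproxy pid really owns the configured port by reading /proc/net/tcp for the socket inodes bound to that port and then looking for one of those inodes under /proc/<pid>/fd. A local-only sketch of that check (IPv4 /proc/net/tcp only, and it assumes permission to read the target pid's fd directory):

import os

def port_hex(port):
    # /proc/net/tcp stores ports as 4-digit uppercase hex, e.g. 2883 -> '0B43'
    return hex(port)[2:].zfill(4).upper()

def socket_inodes(port):
    suffix = ':' + port_hex(port)
    inodes = set()
    with open('/proc/net/tcp') as f:
        next(f)                                  # skip the header line
        for line in f:
            cols = line.split()
            # cols[1] is local_address, cols[9] is the socket inode
            if cols[1].endswith(suffix):
                inodes.add(cols[9])
    return inodes

def pid_owns_port(pid, port):
    targets = {'socket:[%s]' % i for i in socket_inodes(port)}
    fd_dir = '/proc/%s/fd' % pid
    for fd in os.listdir(fd_dir):
        try:
            if os.readlink(os.path.join(fd_dir, fd)) in targets:
                return True
        except OSError:
            continue
    return False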
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
stdio = None
def get_port_socket_inode(client, port):
port = hex(port)[2:].zfill(4).upper()
cmd = "cat /proc/net/{tcp,udp} | awk -F' ' '{print $2,$10}' | grep '00000000:%s' | awk -F' ' '{print $2}' | uniq" % port
res = client.execute_command(cmd)
if not res or not res.stdout.strip():
return False
stdio.verbose(res.stdout)
return res.stdout.strip().split('\n')
def start_check(plugin_context, alert_lv='error', *args, **kwargs):
global stdio
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
success = True
alert = getattr(stdio, alert_lv)
servers_port = {}
for server in cluster_config.servers:
ip = server.ip
client = clients[server]
if ip not in servers_port:
servers_port[ip] = {}
ports = servers_port[ip]
server_config = cluster_config.get_server_conf_with_default(server)
stdio.verbose('%s port check' % server)
for key in ['listen_port', 'prometheus_listen_port']:
port = int(server_config[key])
if port in ports:
alert('%s: %s port is used for %s\'s %s' % (server, port, ports[port]['server'], ports[port]['key']))
success = False
continue
ports[port] = {
'server': server,
'key': key
}
if get_port_socket_inode(client, port):
alert('%s:%s port is already used' % (ip, port))
success = False
if success:
plugin_context.return_true()
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
def status(plugin_context, *args, **kwargs):
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
cluster_status = {}
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
client = clients[server]
cluster_status[server] = 0
if 'home_path' not in server_config:
stdio.print('%s home_path is empty', server)
continue
remote_pid_path = '%s/run/obproxy-%s-%s.pid' % (server_config["home_path"], server.ip, server_config["listen_port"])
remote_pid = client.execute_command('cat %s' % remote_pid_path).stdout.strip()
if remote_pid and client.execute_command('ls /proc/%s' % remote_pid):
cluster_status[server] = 1
return plugin_context.return_true(cluster_status=cluster_status)
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import time
stdio = None
def get_port_socket_inode(client, port):
port = hex(port)[2:].zfill(4).upper()
cmd = "cat /proc/net/{tcp,udp} | awk -F' ' '{print $2,$10}' | grep '00000000:%s' | awk -F' ' '{print $2}' | uniq" % port
res = client.execute_command(cmd)
inode = res.stdout.strip()
if not res or not inode:
return False
stdio.verbose("inode: %s" % inode)
return inode.split('\n')
def confirm_port(client, pid, port):
socket_inodes = get_port_socket_inode(client, port)
if not socket_inodes:
return False
ret = client.execute_command("ls -l /proc/%s/fd/ |grep -E 'socket:\[(%s)\]'" % (pid, '|'.join(socket_inodes)))
if ret and ret.stdout.strip():
return True
return False
def stop(plugin_context, *args, **kwargs):
global stdio
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
servers = {}
stdio.start_loading('Stop obproxy')
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
client = clients[server]
if 'home_path' not in server_config:
stdio.verbose('%s home_path is empty', server)
continue
remote_pid_path = '%s/run/obproxy-%s-%s.pid' % (server_config["home_path"], server.ip, server_config["listen_port"])
remote_pid = client.execute_command('cat %s' % remote_pid_path).stdout.strip()
if remote_pid:
if client.execute_command('ls /proc/%s' % remote_pid):
stdio.verbose('%s obproxy[pid:%s] stopping ...' % (server, remote_pid))
client.execute_command('kill -9 -%s' % remote_pid)
servers[server] = {
'client': client,
'listen_port': server_config['listen_port'],
'prometheus_listen_port': server_config['prometheus_listen_port'],
'pid': remote_pid,
'path': remote_pid_path
}
else:
stdio.verbose('%s obproxy is not running' % server)
count = 10
check = lambda client, pid, port: confirm_port(client, pid, port) if count < 5 else get_port_socket_inode(client, port)
time.sleep(1)
while count and servers:
tmp_servers = {}
for server in servers:
data = servers[server]
stdio.verbose('%s check whether the port is released' % server)
for key in ['prometheus_listen_port', 'listen_port']:
if data[key] and check(data['client'], data['pid'], data[key]):
tmp_servers[server] = data
break
data[key] = ''
else:
data['client'].execute_command('rm -f %s' % data['path'])
stdio.verbose('%s obproxy is stopped', server)
servers = tmp_servers
count -= 1
if count and servers:
time.sleep(3)
if servers:
stdio.stop_loading('fail')
for server in servers:
stdio.warn('%s port not released', server)
else:
stdio.stop_loading('succeed')
plugin_context.return_true()
\ No newline at end of file
- src_path: ./home/admin/oceanbase/lib/libaio.so
target_path: libaio.so
- src_path: ./home/admin/oceanbase/lib/libaio.so.1
target_path: libaio.so.1
- src_path: ./home/admin/oceanbase/lib/libaio.so.1.0.1
target_path: libaio.so.1.0.1
- src_path: ./home/admin/oceanbase/lib/libmariadb.so
target_path: libmariadb.so
- src_path: ./home/admin/oceanbase/lib/libmariadb.so.3
target_path: libmariadb.so.3
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import time
def bootstrap(plugin_context, cursor, *args, **kwargs):
cluster_config = plugin_context.cluster_config
stdio = plugin_context.stdio
bootstrap = []
floor_servers = {}
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
zone = server_config['zone']
if zone in floor_servers:
floor_servers[zone].append('%s:%s' % (server.ip, server_config['rpc_port']))
else:
floor_servers[zone] = []
bootstrap.append('REGION "sys_region" ZONE "%s" SERVER "%s:%s"' % (server_config['zone'], server.ip, server_config['rpc_port']))
try:
sql = 'alter system bootstrap %s' % (','.join(bootstrap))
stdio.start_loading('Cluster bootstrap')
stdio.verbose('execute sql: %s' % sql)
cursor.execute(sql)
for zone in floor_servers:
for addr in floor_servers[zone]:
sql = 'alter system add server "%s" zone "%s"' % (addr, zone)
stdio.verbose('execute sql: %s' % sql)
cursor.execute(sql)
global_conf = cluster_config.get_global_conf()
if 'proxyro_password' in global_conf or 'obproxy' in plugin_context.components:
value = global_conf['proxyro_password'] if 'proxyro_password' in global_conf else ''
sql = 'create user "proxyro" IDENTIFIED BY "%s"' % value
stdio.verbose(sql)
cursor.execute(sql)
sql = 'grant select on oceanbase.* to proxyro IDENTIFIED BY "%s"' % value
stdio.verbose(sql)
cursor.execute(sql)
stdio.stop_loading('succeed')
plugin_context.return_true()
except:
stdio.exception('')
try:
cursor.execute('select * from oceanbase.__all_server')
servers = cursor.fetchall()
stdio.stop_loading('succeed')
plugin_context.return_true()
except:
stdio.stop_loading('fail')
stdio.exception('')
plugin_context.return_false()
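For reference, the loop above composes one REGION/ZONE/SERVER clause per server and joins them into a single ALTER SYSTEM BOOTSTRAP statement; additional servers of an already-seen zone are added afterwards with ALTER SYSTEM ADD SERVER. A small illustration with made-up addresses (one server per zone, so floor_servers stays empty):

servers = [
    {'zone': 'zone1', 'ip': '192.168.1.1', 'rpc_port': 2882},   # made-up addresses
    {'zone': 'zone2', 'ip': '192.168.1.2', 'rpc_port': 2882},
    {'zone': 'zone3', 'ip': '192.168.1.3', 'rpc_port': 2882},
]
bootstrap = ['REGION "sys_region" ZONE "%s" SERVER "%s:%s"' % (s['zone'], s['ip'], s['rpc_port'])
             for s in servers]
print('alter system bootstrap %s' % ','.join(bootstrap))
# alter system bootstrap REGION "sys_region" ZONE "zone1" SERVER "192.168.1.1:2882",REGION ...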
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import sys
import time
if sys.version_info.major == 2:
import MySQLdb as mysql
else:
import pymysql as mysql
def _connect(ip, port):
if sys.version_info.major == 2:
db = mysql.connect(host=ip, user='root', port=port)
cursor = db.cursor(cursorclass=mysql.cursors.DictCursor)
else:
db = mysql.connect(host=ip, user='root', port=port, cursorclass=mysql.cursors.DictCursor)
cursor = db.cursor()
return db, cursor
def connect(plugin_context, target_server=None, *args, **kwargs):
count = 10
cluster_config = plugin_context.cluster_config
stdio = plugin_context.stdio
if target_server:
servers = [target_server]
server_config = cluster_config.get_server_conf(target_server)
stdio.start_loading('Connect observer(%s:%s)' % (target_server, server_config['mysql_port']))
else:
servers = cluster_config.servers
stdio.start_loading('Connect to observer')
while count:
count -= 1
for server in servers:
try:
server_config = cluster_config.get_server_conf(server)
db, cursor = _connect(server.ip, server_config['mysql_port'])
stdio.stop_loading('succeed')
return plugin_context.return_true(connect=db, cursor=cursor)
except:
pass
time.sleep(3)
stdio.stop_loading('fail')
plugin_context.return_false()
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
def destroy(plugin_context, *args, **kwargs):
def clean(server, path):
client = clients[server]
ret = client.execute_command('rm -fr %s/* %s/.conf' % (path, path))
if not ret:
# print stderr
global_ret = False
stdio.warn('fail to clean %s:%s' % (server, path))
else:
stdio.verbose('%s:%s cleaned' % (server, path))
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
global_ret = True
stdio.start_loading('observer work dir cleaning')
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
stdio.verbose('%s work path cleaning', server)
clean(server, server_config['home_path'])
if 'data_dir' in server_config:
clean(server, server_config['data_dir'])
if global_ret:
stdio.stop_loading('succeed')
plugin_context.return_true()
else:
stdio.stop_loading('fail')
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import sys
import time
from prettytable import PrettyTable
def display(plugin_context, cursor, *args, **kwargs):
count = 10
stdio = plugin_context.stdio
stdio.start_loading('Wait for observer init')
while count > 0:
try:
cursor.execute('select * from oceanbase.__all_server')
servers = cursor.fetchall()
if servers:
stdio.print_list(servers, ['ip', 'version', 'port', 'zone', 'status'],
lambda x: [x['svr_ip'], x['build_version'].split('_')[0], x['inner_port'], x['zone'], x['status']], title='observer')
stdio.stop_loading('succeed')
return plugin_context.return_true()
except Exception as e:
if e.args[0] != 1146:
raise e
count -= 1
time.sleep(3)
stdio.stop_loading('fail', 'observer needs bootstrap')
plugin_context.return_false()
- src_path: ./home/admin/oceanbase/bin/observer
target_path: bin/observer
type: bin
mode: 755
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
def init(plugin_context, *args, **kwargs):
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
global_ret = True
force = getattr(plugin_context.options, 'force', False)
stdio.verbose('option `force` is %s' % force)
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
client = clients[server]
home_path = server_config['home_path']
stdio.print('%s initializes cluster work home', server)
if force:
ret = client.execute_command('rm -fr %s/*' % home_path)
if not ret:
global_ret = False
stdio.error('failed to initialize %s home path: %s' % (server, ret.stderr))
continue
else:
if client.execute_command('mkdir -p %s' % home_path):
ret = client.execute_command('ls %s' % (home_path))
if not ret or ret.stdout.strip():
global_ret = False
stdio.error('fail to init %s home path: %s is not empty' % (server, home_path))
continue
else:
stdio.error('fail to init %s home path: create %s failed' % (server, home_path))
ret = client.execute_command('mkdir -p %s/{etc,admin,.conf,log}' % home_path)
if ret:
data_path = server_config['data_dir'] if 'data_dir' in server_config else '%s/store' % home_path
if force:
ret = client.execute_command('rm -fr %s/*' % data_path)
if not ret:
global_ret = False
stdio.error('fail to init %s data path: %s permission denied' % (server, ret.stderr))
continue
else:
if client.execute_command('mkdir -p %s' % data_path):
ret = client.execute_command('ls %s' % (data_path))
if not ret or ret.stdout.strip():
global_ret = False
stdio.error('fail to init %s data path: %s is not empty' % (server, data_path))
continue
else:
stdio.error('fail to init %s data path: create %s failed' % (server, data_path))
ret = client.execute_command('mkdir -p %s/{sstable,clog,ilog,slog}' % data_path)
if ret:
data_path != '%s/store' % home_path and client.execute_command('ln -sf %s %s/store' % (data_path, home_path))
else:
global_ret = False
stdio.error('failed to initialize %s data path', server)
else:
global_ret = False
stdio.error('fail to init %s home path: %s permission denied' % (server, ret.stderr))
global_ret and plugin_context.return_true()
- name: home_path
require: true
type: STRING
min_value: NULL
max_value: NULL
need_redeploy: true
description_en: the directory for the work data file
description_local: OceanBase工作目录
- name: cluster_id
require: true
type: INT
min_value: 1
max_value: 4294901759
need_restart: true
description_en: ID of the cluster
description_local: 本OceanBase集群ID
- name: data_dir
type: STRING
min_value: NULL
max_value: NULL
need_redeploy: true
description_en: the directory for the data file
description_local: 存储sstable等数据的目录
- name: devname
type: STRING
min_value: NULL
max_value: NULL
need_restart: true
description_en: name of network adapter
description_local: 服务进程绑定的网卡设备名
- name: rpc_port
require: true
type: INT
default: 2500
min_value: 1025
max_value: 65535
need_restart: true
description_en: the port number for RPC protocol.
description_local: 集群内部通信的端口号
- name: mysql_port
require: true
type: INT
default: 2880
min_value: 1025
max_value: 65535
need_restart: true
description_en: port number for mysql connection
description_local: SQL服务协议端口号
- name: zone
require: true
type: STRING
default: zone1
min_value: NULL
max_value: NULL
section: OBSERVER
need_redeploy: true
description_en: specifies the zone name
description_local: 节点所在的zone的名字。
- name: max_px_worker_count
require: false
type: INT
default: 64
min_value: 0
max_value: 65535
section: TENANT
need_restart: false
description_en: maximum parallel execution worker count can be used for all parallel requests.
description_local: SQL并行查询引擎使用的最大线程数
- name: enable_separate_sys_clog
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: separate system and user commit log. The default value is false.
description_local: 是否把系统事务日志与用户事务日志分开存储
- name: min_observer_version
require: false
type: STRING
default: 1.1.0
min_value: NULL
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: the min observer version
description_local: 本集群最小的observer程序版本号
- name: sys_cpu_limit_trigger
require: false
type: INT
default: 80
min_value: 50
max_value: NULL
section: OBSERVER
need_restart: false
description_en: when the cpu usage percentage exceeds this trigger, the sys cpu usage will be limited
description_local: 当CPU利用率超过该阈值的时候,将暂停系统后台任务的执行
- name: memory_limit_percentage
require: false
type: INT
default: 80
min_value: 10
max_value: 90
section: OBSERVER
need_restart: false
description_en: memory limit percentage of the total physical memory
description_local: 系统总可用内存大小占总内存大小的百分比
- name: force_refresh_location_cache_threshold
require: false
type: INT
default: 100
min_value: 1
max_value: NULL
section: LOCATION_CACHE
need_restart: false
description_en: the maximum number of location cache refreshes per second that use the SQL method; refreshes beyond this threshold are throttled.
description_local: 刷新位置缓存时每秒最多刷新次数,超过会被限流
- name: sys_bkgd_migration_retry_num
require: false
type: INT
default: 3
min_value: 3
max_value: 100
section: OBSERVER
need_restart: false
description_en: retry num limit during migration.
description_local: 副本迁移失败时最多重试次数
- name: partition_table_check_interval
require: false
type: TIME
default: 30m
min_value: 1m
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: the time interval at which the observer removes replicas that no longer exist on it from the partition table
description_local: 定期检查partition表一致性的时间间隔
- name: tableapi_transport_compress_func
require: false
type: STRING
default: none
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: compressor used for tableAPI query result.
description_local: tableAPI查询结果传输使用的压缩算法
- name: election_blacklist_interval
require: false
type: TIME
default: 1800s
min_value: 0s
max_value: 24h
section: TRANS
need_restart: false
description_en: after a leader revoke, this replica cannot be elected leader within election_blacklist_interval
description_local: 主副本被废除后,有一段时间不允许再被选为主
- name: disk_io_thread_count
require: false
type: INT
default: 8
min_value: 2
max_value: 32
section: OBSERVER
need_restart: false
description_en: The number of io threads on each disk.
description_local: 磁盘IO线程数。必须为偶数。
- name: location_cache_refresh_min_interval
require: false
type: TIME
default: 100ms
min_value: 0s
max_value: NULL
section: LOCATION_CACHE
need_restart: false
description_en: the time interval in which no request for location cache renewal will be executed.
description_local: 位置缓存刷新请求的最小间隔,防止产生过多刷新请求造成系统压力过大
- name: trace_log_slow_query_watermark
type: TIME
default: 100ms
min_value: 1ms
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the threshold of execution time (in milliseconds) of a query beyond which it is considered to be a slow query.
description_local: 执行时间超过该阈值的查询会被认为是慢查询,慢查询的追踪日志会被打印到系统日志中
- name: max_string_print_length
require: false
type: INT
default: 500
min_value: 0
max_value: NULL
section: OBSERVER
need_restart: false
description_en: truncate very long string when printing to log file
description_local: 打印系统日志时,单行日志最大长度
- name: row_compaction_update_limit
require: false
type: INT
default: 6
min_value: 1
max_value: 6400
section: TRANS
need_restart: false
description_en: maximum update count before trigger row compaction
description_local: 触发内存中行内数据合并的修改次数
- name: sys_bkgd_io_high_percentage
require: false
type: INT
default: 90
min_value: 1
max_value: 100
section: OBSERVER
need_restart: false
description_en: the maximum percentage of disk io that sys background io can use
description_local: 系统后台IO最高可以占用IO的百分比
- name: enable_rereplication
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: specifies whether the partition auto-replication is turned on.
description_local: 自动补副本开关
- name: rootservice_async_task_thread_count
require: false
type: INT
default: 4
min_value: 1
max_value: 10
section: ROOT_SERVICE
need_restart: false
description_en: maximum number of threads allowed for executing asynchronous tasks at rootserver.
description_local: RootService内部异步任务使用的线程池大小
- name: major_compact_trigger
require: false
type: INT
default: 5
min_value: 0
max_value: 65535
section: TENANT
need_restart: false
description_en: major_compact_trigger alias to minor_freeze_times
description_local: 多少次小合并触发一次全局合并。值为0时,表示关闭小合并
- name: ssl_client_authentication
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: true
description_en: enable server supports SSL connection, takes effect only after server restart with all ca/cert/key file.
description_local: 是否开启SSL连接功能
- name: balancer_timeout_check_interval
require: false
type: TIME
default: 1m
min_value: 1s
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: the time interval between the schedules of the task that checks whether the partition load balancing task has timed-out.
description_local: 检查负载均衡等后台任务是否超时的时间间隔
- name: datafile_size
require: false
type: CAPACITY
default: 0
min_value: 0M
max_value: NULL
section: SSTABLE
need_restart: false
description_en: size of the data file.
description_local: 数据文件大小。一般不要设置。
- name: clog_cache_priority
require: false
type: INT
default: 1
min_value: 1
max_value: NULL
section: CACHE
need_restart: false
description_en: clog cache priority
description_local: 事务日志占用缓存的优先级
- name: merge_stat_sampling_ratio
require: false
type: INT
default: 100
min_value: 0
max_value: 100
section: OBSERVER
need_restart: false
description_en: column stats sampling ratio for daily merge.
description_local: 合并时候数据列统计信息的采样率
- name: sql_audit_memory_limit
require: false
type: CAPACITY
default: 3G
min_value: 64M
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the maximum size of the memory used by SQL audit virtual table when the function is turned on. The upper limit is 3G, with default 10% of available memory.
description_local: SQL审计数据可占用内存限制
- name: cache_wash_threshold
require: false
type: CAPACITY
default: 4GB
min_value: 0B
max_value: NULL
section: OBSERVER
need_restart: false
description_en: size of remaining memory at which cache eviction will be triggered.
description_local: 触发缓存清理的容量阈值
- name: row_purge_thread_count
require: false
type: INT
default: 4
min_value: 1
max_value: 64
section: TRANS
need_restart: false
description_en: maximum number of threads allowed for executing the row purge task.
description_local: 执行内存中行内数据合并的工作线程数
- name: user_iort_up_percentage
require: false
type: INT
default: 100
min_value: 0
max_value: NULL
section: OBSERVER
need_restart: false
description_en: controls sys io: the percentage by which user io response time is allowed to rise; beyond that, sys background io is throttled
description_local: 用户磁盘IO时延超过该阈值后,系统后台IO任务将被限流
- name: balance_blacklist_retry_interval
require: false
type: TIME
default: 30m
min_value: 0s
max_value: 180m
section: LOAD_BALANCE
need_restart: false
description_en: how long a balance task stays in the blacklist before it can be retried
description_local: 副本迁移等后台任务被放入黑名单后,多久可以重试
- name: high_priority_net_thread_count
require: false
type: INT
default: 0
min_value: 0
max_value: 100
section: OBSERVER
need_restart: true
description_en: the number of rpc I/O threads for high priority messages, 0 means set off
description_local: 高优先级网络线程数,值0表示关闭
- name: index_info_block_cache_priority
require: false
type: INT
default: 1
min_value: 1
max_value: NULL
section: CACHE
need_restart: false
description_en: index info block cache priority
description_local: 块索引在缓存系统中的优先级
- name: max_kept_major_version_number
require: false
type: INT
default: 2
min_value: 1
max_value: 16
section: DAILY_MERGE
need_restart: false
description_en: the maximum number of kept major versions
description_local: 数据保留多少个冻结版本
- name: enable_sys_unit_standalone
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: specifies whether sys unit standalone deployment is turned on.
description_local: 系统租户UNIT是否独占节点
- name: freeze_trigger_percentage
require: false
type: INT
default: 70
min_value: 1
max_value: 99
section: TENANT
need_restart: false
description_en: the threshold of the size of the mem store when freeze will be triggered.
description_local: 触发全局冻结的租户使用内存阈值。另见enable_global_freeze_trigger。
- name: enable_auto_leader_switch
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: specifies whether partition leadership auto-switch is turned on.
description_local: 自动切主开关
- name: enable_major_freeze
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: specifies whether major_freeze function is turned on.
description_local: 自动全局冻结开关
- name: balancer_tolerance_percentage
require: false
type: INT
default: 10
min_value: 1
max_value: 99
section: LOAD_BALANCE
need_restart: false
description_en: specifies the tolerance (in percentage) of the unbalance of the disk space utilization among all units.
description_local: 租户内多个UNIT间磁盘不均衡程度的宽容度,在均值+-宽容度范围之内的不均衡不会触发执行均衡动作
- name: server_cpu_quota_min
require: false
type: DOUBLE
default: 2.5
min_value: 0
max_value: 16
section: TENANT
need_restart: true
description_en: the number of minimal vCPUs allocated to the server tenant (a special internal tenant that exists on every observer)
description_local: 系统可以使用的最小CPU配额,将会预留
- name: memory_reserved
require: false
type: CAPACITY
default: 500M
min_value: 10M
max_value: NULL
section: SSTABLE
need_restart: false
description_en: the size of the system memory reserved for emergency internal use.
description_local: 系统预留内存大小
- name: server_cpu_quota_max
require: false
type: DOUBLE
default: 5
min_value: 0
max_value: 16
section: TENANT
need_restart: true
description_en: the number of maximal vCPUs allocated to the server tenant
description_local: 系统可以使用的最大CPU配额
- name: index_clog_cache_priority
require: false
type: INT
default: 1
min_value: 1
max_value: NULL
section: CACHE
need_restart: false
description_en: index clog cache priority
description_local: 事务日志索引在缓存系统中的优先级
- name: rootservice_ready_check_interval
require: false
type: TIME
default: 3s
min_value: 100000us
max_value: 1m
section: ROOT_SERVICE
need_restart: false
description_en: the interval between the schedule of the task that checks on the status of the ZONE during restarting.
description_local: RootService启动后等待和检查集群状态的时间间隔
- name: debug_sync_timeout
require: false
type: TIME
default: 0
min_value: 0
max_value: NULL
section: OBSERVER
need_restart: false
description_en: Enable the debug sync facility and optionally specify a default wait timeout in microseconds. A zero value keeps the facility disabled
description_local: 打开debug sync调试开关,并设置其超时时间;值为0时,则关闭。
- name: syslog_level
require: false
type: STRING
default: INFO
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies the current level of logging.
description_local: 系统日志级别
- name: all_cluster_list
require: false
type: STRING
default: NULL
min_value: NULL
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: a list of servers which access the same config_url
description_local: 已废除
- name: resource_hard_limit
require: false
type: INT
default: 100
min_value: 1
max_value: 10000
section: LOAD_BALANCE
need_restart: false
description_en: Used along with resource_soft_limit in unit allocation. If server utilization is less than resource_soft_limit, a policy of best fit will be used for unit allocation; otherwise, a least load policy will be employed. Ultimately, system utilization should not be larger than resource_hard_limit.
description_local: CPU和内存等资源进行分配的时候,资源总量是实际数量乘以该百分比的值
- name: leak_mod_to_check
require: false
type: STRING
default: NONE
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the name of the module under memory leak checks
description_local: 内存泄露检查,用于内部调试目的
- name: balancer_task_timeout
require: false
type: TIME
default: 20m
min_value: 1s
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: the time to execute the load-balancing task before it is terminated.
description_local: 负载均衡等后台任务的超时时间
- name: enable_upgrade_mode
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether upgrade mode is turned on. If turned on, daily merger and balancer will be disabled.
description_local: 升级模式开关。在升级模式中,会暂停部分系统后台功能。
- name: enable_unit_balance_resource_weight
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: specifies whether manually configured resource weight is turned on.
description_local: 负载均衡的时候,是否允许配置的资源权重生效
- name: multiblock_read_size
require: false
type: CAPACITY
default: 128K
min_value: 0K
max_value: 2M
section: SSTABLE
need_restart: false
description_en: multiple block batch read size in one read io request.
description_local: 读取数据时IO聚合大小
- name: switchover_process_thread_count
require: false
type: INT
default: 6
min_value: 1
max_value: 1000
section: ROOT_SERVICE
need_restart: false
description_en: maximum number of threads allowed for executing switchover tasks at rootserver
description_local: 主备库切换相关线程池大小
- name: migration_disable_time
require: false
type: TIME
default: 3600s
min_value: 1s
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: the duration in which the observer stays in the block_migrate_in status, which means no partition is allowed to migrate into the server.
description_local: 因磁盘满等原因导致某个节点数据迁入失败时,暂停迁入时长
- name: tablet_size
require: false
type: CAPACITY
default: 128M
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: default tablet size, has to be a multiple of 2M
description_local: 分区内部并行处理(合并、查询等)时每个分片的大小
- name: balancer_emergency_percentage
require: false
type: INT
default: 80
min_value: 1
max_value: 100
section: LOAD_BALANCE
need_restart: false
description_en: Unit load balance is disabled while a zone is merging. But when unit load is above the emergency percentage, the system will still try to migrate out partitions.
description_local: 当UNIT负载超过该阈值时,即使在合并期间也执行负载均衡
- name: dead_socket_detection_timeout
require: false
type: TIME
default: 10s
min_value: 0s
max_value: 2h
section: OBSERVER
need_restart: false
description_en: specify a tcp_user_timeout for RFC5482. A zero value makes the option disabled
description_local: 失效socket检测超时时间
- name: server_check_interval
require: false
type: TIME
default: 30s
min_value: 1s
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: the time interval between schedules of a task that examines the __all_server table.
description_local: server表一致性检查的时间间隔
- name: lease_time
require: false
type: TIME
default: 10s
min_value: 1s
max_value: 5m
section: ROOT_SERVICE
need_restart: false
description_en: Lease for current heartbeat. If the root server does not received any heartbeat from an observer in lease_time seconds, that observer is considered to be offline.
description_local: RootService与其他服务节点之间的租约时长。一般请勿修改。
- name: rootservice_async_task_queue_size
require: false
type: INT
default: 16384
min_value: 8
max_value: 131072
section: ROOT_SERVICE
need_restart: false
description_en: the size of the queue for all asynchronous tasks at rootserver.
description_local: RootService内部异步任务队列的容量
- name: location_refresh_thread_count
require: false
type: INT
default: 4
min_value: 2
max_value: 64
section: LOCATION_CACHE
need_restart: false
description_en: the number of threads that fetch the partition location information from the root service.
description_local: 用于位置缓存刷新的线程数
- name: minor_compact_trigger
require: false
type: INT
default: 2
min_value: 0
max_value: 16
section: TENANT
need_restart: false
description_en: the number of mini merges that trigger a minor merge
description_local: 触发小合并的迷你合并次数
- name: merger_completion_percentage
require: false
type: INT
default: 100
min_value: 5
max_value: 100
section: DAILY_MERGE
need_restart: false
description_en: the merged partition count percentage and merged data size percentage when MERGE is completed
description_local: 合并完成副本数达到该百分比,则认为本轮合并完成调度
- name: major_freeze_duty_time
type: MOMENT
default: Disable
min_value: 00:00
max_value: 23:59
section: DAILY_MERGE
need_restart: false
description_en: the start time of system daily merge procedure.
description_local: 每日定时冻结和合并的触发时刻
- name: ignore_replay_checksum_error
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: TRANS
need_restart: false
description_en: specifies whether error raised from the memtable replay checksum validation can be ignored.
description_local: 是否忽略回放事务日志时发生的校验和错误
- name: log_restore_concurrency
require: false
type: INT
default: 10
min_value: 1
max_value: NULL
section: OBSERVER
need_restart: false
description_en: concurrency for log restoring
description_local: 恢复日志的并发度
- name: user_block_cache_priority
require: false
type: INT
default: 1
min_value: 1
max_value: NULL
section: CACHE
need_restart: false
description_en: user block cache priority
description_local: 数据块缓存在缓存系统中的优先级
- name: syslog_io_bandwidth_limit
require: false
type: CAPACITY
default: 30MB
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: Syslog IO bandwidth limitation; syslog exceeding the bandwidth will be discarded. Use 0 to disable ERROR log.
description_local: 系统日志所能占用的磁盘IO带宽上限,超过带宽的系统日志将被丢弃
- name: workers_per_cpu_quota
require: false
type: INT
default: 10
min_value: 2
max_value: 20
section: TENANT
need_restart: false
description_en: the ratio(integer) between the number of system allocated workers vs the maximum number of threads that can be scheduled concurrently.
description_local: 每个CPU配额分配多少个工作线程
- name: enable_record_trace_id
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether record app trace id is turned on.
description_local: 是否记录应用端设置的追踪ID
- name: config_additional_dir
require: false
type: STRING_LIST
default: etc2;etc3
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: additional directories of configure file
description_local: 本地存储配置文件的多个目录,为了冗余存储多份配置文件
- name: enable_syslog_recycle
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether log file recycling is turned on
description_local: 是否自动回收系统日志
- name: meta_table_read_write_mode
require: false
type: INT
default: 2
min_value: 0
max_value: 2
section: OBSERVER
need_restart: false
description_en: meta table read write mode. 0 means read write __all_meta_table; 1 means read write __all_meta_table while write __all_tenant_meta_table; 2 means read write __all_tenant_meta_table
description_local: 控制meta表的读写模式,本配置用于OB升级过程内部实现,外部请勿使用
- name: clog_disk_usage_limit_percentage
require: false
type: INT
default: 95
min_value: 80
max_value: 100
section: TRANS
need_restart: false
description_en: maximum of clog disk usage percentage before stop submitting or receiving logs.
description_local: 事务日志的磁盘IO最大可用的磁盘利用率
- name: px_task_size
require: false
type: CAPACITY
default: 2M
min_value: 2M
max_value: NULL
section: OBSERVER
need_restart: false
description_en: min task access size of px task
description_local: SQL并行查询引擎每个任务处理的数据量大小
- name: index_cache_priority
require: false
type: INT
default: 10
min_value: 1
max_value: NULL
section: CACHE
need_restart: false
description_en: index cache priority
description_local: 索引在缓存系统中的优先级
- name: replica_safe_remove_time
require: false
type: TIME
default: 2h
min_value: 1m
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: the time beyond which a replica that no longer exists and has not been modified is considered safe to remove
description_local: 已删除副本可以被清理的安全保留时间
- name: builtin_db_data_verify_cycle
require: false
type: INT
default: 20
min_value: 0
max_value: 360
section: OBSERVER
need_restart: false
description_en: check cycle of db data.
description_local: 数据坏块自检周期,单位为天。值0表示不检查。
- name: enable_merge_by_turn
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: DAILY_MERGE
need_restart: false
description_en: specifies whether merge tasks can be performed on different zones in an alternating fashion.
description_local: 轮转合并策略开关
- name: system_cpu_quota
require: false
type: DOUBLE
default: 10
min_value: 0
max_value: 16
section: TENANT
need_restart: false
description_en: the number of vCPUs allocated to the server tenant
description_local: 系统后台任务可使用CPU配额
- name: tenant_groups
require: false
type: STRING
default: NULL
min_value: NULL
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: specifies tenant groups for server balancer.
description_local: 设置负载均衡策略中使用的租户组,详见负载均衡文档说明
- name: enable_sys_table_ddl
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: specifies whether a system table is allowed to be created manually.
description_local: 是否允许新建和修改系统表。主要在系统升级过程中使用。
- name: backup_region
require: false
type: STRING
default: NULL
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: user suggest backup region
description_local: 用户显式指定建议哪个地域执行备份
- name: merge_thread_count
require: false
type: INT
default: 0
min_value: 0
max_value: 64
section: OBSERVER
need_restart: false
description_en: worker thread num for compaction
description_local: 用于合并的线程数
- name: force_refresh_location_cache_interval
require: false
type: TIME
default: 2h
min_value: 1s
max_value: NULL
section: LOCATION_CACHE
need_restart: false
description_en: the max interval for refresh location cache
description_local: 刷新位置缓存的最大间隔
- name: net_thread_count
require: false
type: INT
default: 12
min_value: 1
max_value: 100
section: OBSERVER
need_restart: true
description_en: the number of rpc/mysql I/O threads for Libeasy.
description_local: 网络IO线程数
- name: max_stale_time_for_weak_consistency
require: false
type: TIME
default: 5s
min_value: 5s
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the max data stale time that observer can provide service when its parent is invalid.
description_local: 弱一致性读允许读到多旧的数据
- name: minor_freeze_times
require: false
type: INT
default: 5
min_value: 0
max_value: 65535
section: TENANT
need_restart: false
description_en: specifies how many minor freeze should be triggered between two major freeze.
description_local: 多少次小合并触发一次全局合并。值为0时,表示关闭小合并。与major_compact_trigger等同。
- name: backup_log_archive_option
require: false
type: STRING
default: OPTIONAL
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: backup log archive option, supports MANDATORY/OPTIONAL and COMPRESSION
description_local: 日志备份的参数
- name: trx_try_wait_lock_timeout
require: false
type: TIME
default: 0ms
min_value: NULL
max_value: NULL
section: TRANS
need_restart: false
description_en: the time to wait on row lock acquiring before retry.
description_local: 语句执行过程上行锁的等待时长
- name: backup_concurrency
require: false
type: INT
default: 0
min_value: 0
max_value: 100
section: OBSERVER
need_restart: false
description_en: backup concurrency limit.
description_local: observer备份基线的并发度
- name: balancer_log_interval
require: false
type: TIME
default: 1m
min_value: 1s
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: the time interval between logging the load-balancing tasks statistics.
description_local: 负载均衡等后台任务线程打印统计日志的间隔时间
- name: restore_concurrency
require: false
type: INT
default: 0
min_value: 0
max_value: 512
section: OBSERVER
need_restart: false
description_en: the current work thread num of restore macro block.
description_local: 从备份恢复租户数据时最大并发度
- name: micro_block_merge_verify_level
require: false
type: INT
default: 2
min_value: 0
max_value: 3
section: OBSERVER
need_restart: false
description_en: specify what kind of verification should be done when merging micro block. 0, no verification will be done; 1, verify encoding algorithm, encoded micro block will be read to ensure data is correct; 2, verify encoding and compression algorithm, besides encoding verification, compressed block will be decompressed to ensure data is correct; 3, verify encoding, compression algorithm and lost write protect
description_local: 控制合并时宏块的校验级别
- name: backup_net_limit
require: false
type: CAPACITY
default: 0M
min_value: 0M
max_value: NULL
section: OBSERVER
need_restart: false
description_en: backup net limit for whole cluster
description_local: 集群备份的总带宽限制
- name: bf_cache_miss_count_threshold
require: false
type: INT
default: 100
min_value: 0
max_value: NULL
section: CACHE
need_restart: false
description_en: bf cache miss count threshold, 0 means disable bf cache
description_local: 用于控制bloomfilter cache的触发次数,当宏块未命中次数达到这个值时,给创建bloomfilter缓存。0表示关闭。
- name: backup_dest
require: false
type: STRING
default: NULL
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: backup dest
description_local: 备份的目标地址
- name: weak_read_version_refresh_interval
require: false
type: TIME
default: 50ms
min_value: 0ms
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the time interval to refresh cluster weak read version
description_local: 弱一致性读版本号的刷新周期,影响弱一致性读数据的延时;值为0时,表示不再刷新弱一致性读版本号,不提供单调读功能
- name: large_query_worker_percentage
require: false
type: DOUBLE
default: 30
min_value: 0
max_value: 100
section: TENANT
need_restart: false
description_en: the percentage of the workers reserved to serve large query request.
description_local: 预留给大查询处理的工作线程百分比
- name: enable_pg
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: open partition group
description_local: 分区组功能开关
- name: clog_transport_compress_all
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: TRANS
need_restart: false
description_en: If this option is set to true, use compression for clog transport. The default is false(no compression)
description_local: 事务日志传输时是否压缩
- name: server_temporary_offline_time
require: false
type: TIME
default: 60s
min_value: 15s
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: the time interval between two heartbeats beyond which a server is considered to be temporarily offline.
description_local: 节点心跳中断多久后认为其被“临时下线”
- name: flush_log_at_trx_commit
require: false
type: INT
default: 1
min_value: 0
max_value: 2
section: TRANS
need_restart: false
description_en: 0 means commit transactions without waiting clog write to buffer cache, 1 means commit transactions after clog flush to disk, 2 means commit transactions after clog write to buffer cache
description_local: 事务提交时写事务日志策略。0表示不等待日志写入缓冲区,1表示等待日志写入磁盘,2表示等待日志写入缓冲区而不等落盘
- name: zone_merge_timeout
require: false
type: TIME
default: 3h
min_value: 1s
max_value: NULL
section: DAILY_MERGE
need_restart: false
description_en: the time for each zone to finish its merge process before the root service no longer considers it as in MERGE state
description_local: 单个Zone的合并超时时间
- name: global_major_freeze_residual_memory
require: false
type: INT
default: 40
min_value: 1
max_value: 99
section: OBSERVER
need_restart: false
description_en: post global major freeze when observer memstore free memory (plus memory held by frozen memstore and blockcache) reaches this limit. The limit is calculated as memory_limit * (1 - system_memory_percentage/100) * global_major_freeze_residual_memory/100
description_local: 当剩余内存小于这个百分比时,触发全局冻结
- name: enable_sql_audit
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether SQL audit is turned on.
description_local: SQL审计功能开关
- name: server_data_copy_out_concurrency
require: false
type: INT
default: 2
min_value: 1
max_value: 1000
section: LOAD_BALANCE
need_restart: false
description_en: the maximum number of partitions allowed to migrate from the server.
description_local: 单个节点迁出数据最大并发数
- name: merger_switch_leader_duration_time
require: false
type: TIME
default: 3m
min_value: 0s
max_value: 30m
section: ROOT_SERVICE
need_restart: false
description_en: switch leader duration time for daily merge.
description_local: 合并时,批量切主的时间间隔
- name: enable_record_trace_log
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether to always record the trace log.
description_local: 是否记录追踪日志
- name: sys_bkgd_migration_change_member_list_timeout
require: false
type: TIME
default: 1h
min_value: 0s
max_value: 24h
section: OBSERVER
need_restart: false
description_en: the timeout for migration change member list retry.
description_local: 副本迁移时变更Paxos成员组操作的超时时间
- name: rootservice_list
require: false
type: STRING_LIST
default: NULL
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: a list of servers which contains rootservice
description_local: RootService及其副本所在的机器列表
- name: enable_smooth_leader_switch
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: to be removed
description_local: 平滑切主特性开关
- name: enable_syslog_wf
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether any log message with a log level higher than WARN would be printed into a separate file with a suffix of wf
description_local: 是否把WARN以上级别的系统日志打印到一个单独的日志文件中
- name: global_index_build_single_replica_timeout
require: false
type: TIME
default: 48h
min_value: 1h
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: build single replica task timeout when rootservice schedule to build global index.
description_local: 建全局索引时,每个副本构建的超时时间
- name: memstore_limit_percentage
require: false
type: INT
default: 50
min_value: 1
max_value: 99
section: TENANT
need_restart: false
description_en: used in calculating the value of MEMSTORE_LIMIT
description_local: 租户用于memstore的内存占其总可用内存的百分比
- name: election_cpu_quota
require: false
type: DOUBLE
default: 3
min_value: 0
max_value: 10
section: TENANT
need_restart: false
description_en: the number of vCPUs allocated to the election tenant
description_local: 给副本选举相关的后台工作分配的CPU配额
- name: minor_deferred_gc_time
require: false
type: TIME
default: 0s
min_value: 0s
max_value: 24h
section: OBSERVER
need_restart: false
description_en: sstable deferred gc time after merge
description_local: 合并之后SSTable延迟回收间隔
- name: data_disk_usage_limit_percentage
require: false
type: INT
default: 90
min_value: 50
max_value: 100
section: OBSERVER
need_restart: false
description_en: the safe use percentage of data disk
description_local: 数据文件最大可以写入的百分比,超过这个阈值后,禁止数据迁入
- name: enable_perf_event
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether to enable perf event feature.
description_local: perf event调试特性开关
- name: obconfig_url
require: false
type: STRING
default: NULL
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: URL for OBConfig service
description_local: OBConfig服务的URL地址
- name: rebuild_replica_data_lag_threshold
require: false
type: CAPACITY
default: 0
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: size of clog files by which a replica lags behind the leader before a rebuild is triggered
description_local: 备副本的事务日志和主副本差距超过该阈值时,触发副本重建
- name: system_memory
type: CAPACITY
default: 16G
min_value: 0M
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the memory reserved for internal use which cannot be allocated to any outer-tenant, and should be determined to guarantee every server functions normally.
description_local: 系统预留内存大小,不能分配给普通租户使用
- name: cpu_quota_concurrency
require: false
type: DOUBLE
default: 4
min_value: 1
max_value: 10
section: TENANT
need_restart: false
description_en: max allowed concurrency for 1 CPU quota
description_local: 租户每个CPU配额允许的最大并发数
- name: auto_leader_switch_interval
require: false
type: TIME
default: 30s
min_value: 1s
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: time interval for periodic leadership reorganization taking place.
description_local: 自动切主后台线程工作间隔时间
- name: zone_merge_order
require: false
type: STRING
default: NULL
min_value: NULL
max_value: NULL
section: DAILY_MERGE
need_restart: false
description_en: the order of zone start merge in daily merge
description_local: 轮转合并的时候,多个Zone的顺序。不指定的时候,由系统自动决定。
- name: log_archive_checkpoint_interval
require: false
type: TIME
default: 120s
min_value: 5s
max_value: 1h
section: OBSERVER
need_restart: false
description_en: control interval of generating log archive checkpoint for cold partition
description_local: 单个observer物理备份中推进冷分区备份位点的时间间隔
- name: backup_recovery_window
require: false
type: TIME
default: 0
min_value: 0
max_value: NULL
section: OBSERVER
need_restart: false
description_en: backup expired day limit, 0 means not expired
description_local: 恢复窗口大小
- name: default_row_format
require: false
type: STRING
default: dynamic
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: default row format in mysql mode
description_local: MySQL模式下,建表时使用的默认行格式
- name: merger_warm_up_duration_time
require: false
type: TIME
default: 0s
min_value: 0s
max_value: 60m
section: ROOT_SERVICE
need_restart: false
description_en: warm up duration time for daily merge.
description_local: 合并时,新版基线数据预热时间
- name: token_reserved_percentage
require: false
type: DOUBLE
default: 30
min_value: 0
max_value: 100
section: TENANT
need_restart: false
description_en: specifies the amount of token increase allocated to a tenant based on his consumption from the last round (without exceeding his upper limit).
description_local: 控制租户CPU调度中每次预留多少比例的空闲token数给租户
- name: stack_size
require: false
type: CAPACITY
default: 1M
min_value: 512K
max_value: 20M
section: OBSERVER
need_restart: true
description_en: the size of routine execution stack
description_local: 程序函数调用栈大小
- name: balancer_idle_time
require: false
type: TIME
default: 5m
min_value: 10s
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: the time interval between the schedules of the partition load-balancing task.
description_local: 负载均衡等后台任务线程空闲时的唤醒间隔时间
- name: memory_limit
require: false
type: CAPACITY
default: 0
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the size of the memory reserved for internal use(for testing purpose)
description_local: 可用总内存大小。用于调试,不要设置。
- name: __min_full_resource_pool_memory
require: true
type: INT
default: 268435456
min_value: 536870912
max_value:
need_restart: false
description_en: the minimum memory limit of the resource pool
description_local: 资源池最小内存限制
- name: clog_transport_compress_func
require: false
type: STRING
default: lz4_1.0
min_value: NULL
max_value: NULL
section: TRANS
need_restart: false
description_en: compressor used for clog transport
description_local: 事务日志内部传输时使用的压缩算法
- name: virtual_table_location_cache_expire_time
require: false
type: TIME
default: 8s
min_value: 1s
max_value: NULL
section: LOCATION_CACHE
need_restart: false
description_en: expiration time for virtual table location info in partition location cache.
description_local: 虚拟表的位置信息缓存过期时间
- name: ssl_external_kms_info
require: false
type: STRING
default: NULL
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: when using the external key management center for ssl, this parameter will store some key management information
description_local: 配置ssl使用的主密钥管理服务
- name: enable_sql_operator_dump
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether sql operators (sort/hash join/material/window function/interm result/...) allowed to write to disk
description_local: 是否允许SQL处理过程的中间结果写入磁盘以释放内存
- name: enable_rich_error_msg
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether to add ip:port, time and trace id to the user error message.
description_local: 是否在客户端消息中添加服务器地址、时间、追踪ID等调试信息
- name: enable_election_group
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether election group is turned on.
description_local: 是否打开选举组策略
- name: log_archive_concurrency
require: false
type: INT
default: 0
min_value: 0
max_value: NULL
section: OBSERVER
need_restart: false
description_en: concurrency for log_archive_sender and log_archive_spiter
description_local: 日志归档并发度
- name: server_balance_disk_tolerance_percent
require: false
type: INT
default: 1
min_value: 1
max_value: 100
section: LOAD_BALANCE
need_restart: false
description_en: specifies the tolerance (in percentage) of the unbalance of the disk space utilization among all servers. The average disk space utilization is calculated by dividing the total space by the number of servers. server balancer will start a rebalancing task when the deviation between the average usage and some server load is greater than this tolerance
description_local: 节点负载均衡策略中,磁盘资源不均衡的容忍度
- name: location_cache_priority
require: false
type: INT
default: 1000
min_value: 1
max_value: NULL
section: LOCATION_CACHE
need_restart: false
description_en: priority of location cache among all system caching service.
description_local: 位置缓存在缓存中的优先级
- name: user_tab_col_stat_cache_priority
require: false
type: INT
default: 1
min_value: 1
max_value: NULL
section: CACHE
need_restart: false
description_en: user tab col stat cache priority
description_local: 统计数据缓存在缓存系统中的优先级
- name: recyclebin_object_expire_time
require: false
type: TIME
default: 0s
min_value: 0s
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: recyclebin object expire time, default 0 that means auto purge recyclebin off.
description_local: 回收站对象的有效期,超过有效的对象将被回收;0表示关闭回收功能;
- name: gts_refresh_interval
require: false
type: TIME
default: 100us
min_value: 10us
max_value: 1s
section: TRANS
need_restart: false
description_en: gts source refresh ts value in this interval
description_local: 获取刷新全局时间戳服务的间隔
- name: minor_warm_up_duration_time
require: false
type: TIME
default: 30s
min_value: 0s
max_value: 60m
section: OBSERVER
need_restart: false
description_en: warm up duration time for minor freeze.
description_local: 小合并产生新转储文件的预热时间
- name: sys_bkgd_io_low_percentage
require: false
type: INT
default: 0
min_value: 0
max_value: 100
section: OBSERVER
need_restart: false
description_en: the low disk io percentage of sys io; sys io can use at least this percentage. When the value is 0, a low limit is automatically set for SATA and SSD disks to guarantee at least 128MB disk bandwidth
description_local: 系统后台IO最少可以占用IO的百分比。当值为0时,系统自动根据环境配置。
- name: migrate_concurrency
require: false
type: INT
default: 10
min_value: 0
max_value: 64
section: OBSERVER
need_restart: false
description_en: set concurrency of migration, set upper limit to migrate_concurrency and set lower limit to migrate_concurrency/2
description_local: 控制内部数据迁移的并发度
- name: cpu_reserved
require: false
type: INT
default: 2
min_value: 0
max_value: 15
section: OBSERVER
need_restart: false
description_en: the number of CPUs reserved for system usage.
description_local: 预留给系统使用的CPU数,其余将被OceanBase独占使用
- name: redundancy_level
require: false
type: STRING
default: NORMAL
min_value: NULL
max_value: NULL
section: SSTABLE
need_restart: false
description_en: EXTERNAL, use external redundancy; NORMAL, tolerate one disk failure; HIGH, tolerate two disk failures if disk count is enough
description_local: OB内置本地磁盘RAID特性。暂勿使用
- name: server_data_copy_in_concurrency
require: false
type: INT
default: 2
min_value: 1
max_value: 1000
section: LOAD_BALANCE
need_restart: false
description_en: the maximum number of partitions allowed to migrate to the server.
description_local: 单个节点迁入数据最大并发数
- name: rootservice_memory_limit
require: false
type: CAPACITY
default: 2G
min_value: 2G
max_value: NULL
section: OBSERVER
need_restart: false
description_en: max memory size which can be used by rs tenant
description_local: RootService最大内存限制
- name: plan_cache_low_watermark
require: false
type: CAPACITY
default: 1500M
min_value: NULL
max_value: NULL
section: TENANT
need_restart: false
description_en: memory usage at which plan cache eviction will be stopped.
description_local: 执行计划缓存占用内存低于该阈值时将停止淘汰
- name: partition_table_scan_batch_count
require: false
type: INT
default: 999
min_value: 1
max_value: 65536
section: ROOT_SERVICE
need_restart: false
description_en: the number of partition replication info that will be read by each request on the partition-related system tables during procedures such as load-balancing, daily merge, election and etc.
description_local: 批量读取partition表时的批次大小
- name: trx_2pc_retry_interval
require: false
type: TIME
default: 100ms
min_value: 1ms
max_value: 5000ms
section: TRANS
need_restart: false
description_en: the time interval between the retries in case of failure during a transactions two-phase commit phase
description_local: 两阶段提交失败时候自动重试的间隔
- name: global_write_halt_residual_memory
require: false
type: INT
default: 30
min_value: 1
max_value: 99
section: OBSERVER
need_restart: false
description_en: disable write to memstore when observer memstore free memory (plus memory held by blockcache) is lower than this limit
description_local: 当全局剩余内存小于这个百分比时,暂停普通租户写入(sys租户不受影响)
- name: cpu_count
require: false
type: INT
default: 0
min_value: 0
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the number of CPUs in the system. If this parameter is set to zero, the number will be set according to sysconf; otherwise, this parameter is used.
description_local: 系统CPU总数,如果设置为0,将自动检测
- name: auto_delete_expired_backup
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: control if auto delete expired backup
description_local: 自动删除过期的备份
- name: max_syslog_file_count
require: false
type: INT
default: 0
min_value: 0
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies the maximum number of the log files that can co-exist before the log file recycling kicks in. Each log file can occupy at most 256MB disk space. When this value is set to 0, no log file will be removed.
description_local: 系统日志自动回收复用时,最多保留多少个。值0表示不自动清理。
- name: appname
require: false
type: STRING
default: obcluster
min_value: NULL
max_value: NULL
section: OBSERVER
need_redeploy: true
description_en: Name of the cluster
description_local: 本OceanBase集群名
- name: use_large_pages
require: false
type: STRING
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: true
description_en: used to manage the database's use of large pages; values are false, true, only
description_local: 控制内存大页的行为,"true"表示在操作系统开启内存大页并且有空闲大页时,数据库总是申请内存大页,否则申请普通内存页, "false"表示数据库不使用大页, "only"表示数据库总是分配大页
- name: dtl_buffer_size
require: false
type: CAPACITY
default: 64K
min_value: 4K
max_value: 2M
section: OBSERVER
need_restart: false
description_en: buffer size for DTL
description_local: SQL数据传输模块使用的缓存大小
- name: server_balance_critical_disk_waterlevel
require: false
type: INT
default: 80
min_value: 0
max_value: 100
section: LOAD_BALANCE
need_restart: false
description_en: disk water level to determine server balance strategy
description_local: 磁盘水位线超过该阈值时,负载均衡策略将倾向于优先考虑磁盘均衡
- name: ignore_replica_checksum_error
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: TRANS
need_restart: false
description_en: specifies whether error raised from the partition checksum validation can be ignored.
description_local: 是否忽略多副本间校验和检查发生的错误
- name: location_fetch_concurrency
require: false
type: INT
default: 20
min_value: 1
max_value: 1000
section: LOCATION_CACHE
need_restart: false
description_en: the maximum number of the tasks which fetch the partition location information concurrently.
description_local: 位置缓存信息刷新的最大并发度
- name: location_cache_expire_time
require: false
type: TIME
default: 600s
min_value: 1s
max_value: NULL
section: LOCATION_CACHE
need_restart: false
description_en: the expiration time for a partition location info in partition location cache.
description_local: 位置缓存中缓存项的过期时长
- name: enable_async_syslog
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether to use async syslog
description_local: 是否启用系统日志异步写
- name: clog_sync_time_warn_threshold
require: false
type: TIME
default: 100ms
min_value: 1ms
max_value: 10000ms
section: TRANS
need_restart: false
description_en: the time given to the commit log synchronization between a leader and its followers before a warning message is printed in the log file.
description_local: 事务日志同步耗时告警阈值,同步耗时超过该值产生WARN日志
- name: location_cache_cpu_quota
require: false
type: DOUBLE
default: 5
min_value: 0
max_value: 10
section: TENANT
need_restart: false
description_en: the number of vCPUs allocated for the requests regarding location info of the core tables.
description_local: 位置缓存模块使用的CPU配额
- name: bf_cache_priority
require: false
type: INT
default: 1
min_value: 1
max_value: NULL
section: CACHE
need_restart: false
description_en: bloomfilter cache priority
description_local: 布隆过滤器占用缓存的优先级
- name: merger_check_interval
require: false
type: TIME
default: 10m
min_value: 10s
max_value: 60m
section: DAILY_MERGE
need_restart: false
description_en: the time interval between the schedules of the task that checks on the progress of MERGE for each zone.
description_local: 合并状态检查线程的调度间隔
- name: zone_merge_concurrency
require: false
type: INT
default: 0
min_value: 0
max_value: NULL
section: DAILY_MERGE
need_restart: false
description_en: the maximum number of zones which are allowed to be in the MERGE status concurrently.
description_local: 最多多少个Zone可以并发开始合并。当值为0时,由系统根据部署情况自动选择最佳并发度
- name: enable_rootservice_standalone
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: specifies whether the SYS tenant is allowed to occupy an observer exclusively, thus running in the standalone mode.
description_local: 是否让系统租户和RootService独占observer节点
- name: minor_merge_concurrency
require: false
type: INT
default: 0
min_value: 0
max_value: 64
section: OBSERVER
need_restart: false
description_en: the current work thread num of minor merge.
description_local: 小合并时的并发线程数
- name: px_workers_per_cpu_quota
require: false
type: INT
default: 10
min_value: 0
max_value: 20
section: TENANT
need_restart: false
description_en: the ratio between the number of system allocated px workers vs the maximum number of threads that can be scheduled concurrently.
description_local: 并行执行工作线程数的比例
- name: large_query_threshold
require: false
type: TIME
default: 100ms
min_value: 1ms
max_value: NULL
section: TENANT
need_restart: false
description_en: threshold for execution time beyond which a request may be paused and rescheduled as a large request
description_local: 一个查询执行时间超过该阈值会被判断为大查询,执行大查询调度策略
- name: sys_bkgd_net_percentage
require: false
type: INT
default: 60
min_value: 0
max_value: 100
section: OBSERVER
need_restart: false
description_en: the percentage of network bandwidth that sys background tasks can use.
description_local: 后台系统任务可占用网络带宽百分比
- name: fuse_row_cache_priority
require: false
type: INT
default: 1
min_value: 1
max_value: NULL
section: CACHE
need_restart: false
description_en: fuse row cache priority
description_local: 融合行缓存在缓存系统中的优先级
- name: rpc_timeout
require: false
type: TIME
default: 2s
min_value: NULL
max_value: NULL
section: RPC
need_restart: false
description_en: the time during which an RPC request is permitted to execute before it is terminated
description_local: 集群内部请求的超时时间
- name: multiblock_read_gap_size
require: false
type: CAPACITY
default: 0K
min_value: 0K
max_value: 2M
section: SSTABLE
need_restart: false
description_en: max gap size in one read io request, gap means blocks that hit in block cache
description_local: 一次IO聚合读取时从块缓存中读取的最大大小
- name: tenant_task_queue_size
require: false
type: INT
default: 65536
min_value: 1024
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the size of the task queue for each tenant.
description_local: 每个租户的请求队列大小
- name: clog_disk_utilization_threshold
require: false
type: INT
default: 80
min_value: 10
max_value: 99
section: TRANS
need_restart: false
description_en: clog disk utilization threshold before clog files are reused; should be less than clog_disk_usage_limit_percentage.
description_local: Clog磁盘空间复用水位
- name: resource_soft_limit
require: false
type: INT
default: 50
min_value: 1
max_value: 10000
section: LOAD_BALANCE
need_restart: false
description_en: Used along with resource_hard_limit in unit allocation. If server utilization is less than resource_soft_limit, a policy of best fit will be used for unit allocation; otherwise, a least load policy will be employed. Ultimately, system utilization should not be larger than resource_hard_limit.
description_local: 当所有节点的资源水位低于该阈值时,不执行负载均衡
- name: plan_cache_evict_interval
require: false
type: TIME
default: 1s
min_value: 0s
max_value: NULL
section: TENANT
need_restart: false
description_en: time interval for periodic plan cache eviction.
description_local: 执行计划缓存的淘汰间隔
- name: server_balance_cpu_mem_tolerance_percent
require: false
type: INT
default: 5
min_value: 1
max_value: 100
section: LOAD_BALANCE
need_restart: false
description_en: specifies the tolerance (in percentage) of the unbalance of the cpu/memory utilization among all servers. The average cpu/memory utilization is calculated by dividing the total cpu/memory by the number of servers. server balancer will start a rebalancing task when the deviation between the average usage and some server load is greater than this tolerance
description_local: 节点负载均衡策略中,CPU和内存资源不均衡的容忍度
- name: autoinc_cache_refresh_interval
require: false
type: TIME
default: 3600s
min_value: 100ms
max_value: NULL
section: OBSERVER
need_restart: false
description_en: auto-increment service cache refresh sync_value in this interval
description_local: 自动刷新自增列值的时间间隔
- name: all_server_list
require: false
type: STRING
default: NULL
min_value: NULL
max_value: NULL
section: LOCATION_CACHE
need_restart: false
description_en: all server addr in cluster
description_local: 集群中所有机器的列表,不建议人工修改
- name: enable_global_freeze_trigger
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: specifies whether to trigger major freeze when global active memstore used reach freeze_trigger_percentage
description_local: 自动触发全局冻结开关。如果打开,当数据内存占用超过freeze_trigger_percentage时,自动触发全局冻结和合并。
- name: enable_rebalance
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: specifies whether the partition load-balancing is turned on.
description_local: 自动负载均衡开关
- name: internal_sql_execute_timeout
require: false
type: TIME
default: 30s
min_value: 1000us
max_value: 10m
section: OBSERVER
need_restart: false
description_en: the number of microseconds an internal DML request is permitted to execute before it is terminated.
description_local: 系统内部SQL请求的超时时间
- name: user_row_cache_priority
require: false
type: INT
default: 1
min_value: 1
max_value: NULL
section: CACHE
need_restart: false
description_en: user row cache priority
description_local: 基线数据行缓存在缓存系统中的优先级
- name: server_permanent_offline_time
require: false
type: TIME
default: 3600s
min_value: 20s
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: the time interval between any two heartbeats beyond which a server is considered to be permanently offline.
description_local: 节点心跳中断多久后认为其被“永久下线”,“永久下线”的节点上的数据副本需要被自动补足
- name: schema_history_expire_time
require: false
type: TIME
default: 7d
min_value: 1m
max_value: 30d
section: OBSERVER
need_restart: false
description_en: the expire time for schema history
description_local: 元数据历史数据过期时间
- name: get_leader_candidate_rpc_timeout
require: false
type: TIME
default: 9s
min_value: 2s
max_value: 180s
section: ROOT_SERVICE
need_restart: false
description_en: the time during which a get leader candidate rpc request is permitted to execute before it is terminated.
description_local: 自动切主策略获取切主候选者的内部请求超时时间
- name: datafile_disk_percentage
require: false
type: INT
default: 90
min_value: 5
max_value: 99
section: SSTABLE
need_restart: false
description_en: the percentage of disk space used by the data files.
description_local: data_dir所在磁盘将被OceanBase系统初始化用于存储数据,本配置项表示占用该磁盘总空间的百分比
- name: default_compress_func
require: false
type: STRING
default: zstd_1.0
min_value: NULL
max_value: NULL
section: OBSERVER
need_restart: false
description_en: default compress function name for create new table
description_local: MySQL模式下,建表时使用的默认压缩算法
- name: enable_manual_merge
require: false
type: BOOL
default: false
min_value: NULL
max_value: NULL
section: DAILY_MERGE
need_restart: false
description_en: specifies whether manual MERGE is turned on
description_local: 手工合并开关
- name: memory_chunk_cache_size
require: false
type: CAPACITY
default: 0M
min_value: 0M
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the maximum size of memory cached by memory chunk cache.
description_local: 内存分配器缓存的内存块容量。值为0的时候表示系统自适应。
- name: ob_event_history_recycle_interval
require: false
type: TIME
default: 7d
min_value: 1d
max_value: 180d
section: ROOT_SERVICE
need_restart: false
description_en: the time to recycle event history.
description_local: OB事件表中事件条目的保存期限
- name: enable_ddl
require: false
type: BOOL
default: true
min_value: NULL
max_value: NULL
section: ROOT_SERVICE
need_restart: false
description_en: specifies whether DDL operation is turned on.
description_local: 是否允许执行DDL
- name: unit_balance_resource_weight
require: false
type: STRING
default: NULL
min_value: NULL
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: the percentage variation for any tenant's resource weight. The default value is empty. All weights must add up to 100 if set
description_local: UNIT均衡策略中使用的资源权重,一般不需要手工配置。当打开enable_unit_balance_resource_weight时本配置才生效。
- name: balance_blacklist_failure_threshold
require: false
type: INT
default: 5
min_value: 0
max_value: 1000
section: LOAD_BALANCE
need_restart: false
description_en: the number of consecutive failures after which a balance task is put into the blacklist
description_local: 副本迁移等后台任务连续失败超过该阈值后,将被放入黑名单
- name: system_trace_level
require: false
type: INT
default: 1
min_value: 0
max_value: 2
section: OBSERVER
need_restart: false
description_en: system trace log level, 0:none, 1:standard, 2:debug.
description_local: 系统追踪日志的日志打印级别
- name: data_copy_concurrency
require: false
type: INT
default: 20
min_value: 1
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: the maximum number of the data replication tasks.
description_local: 系统中并发执行的数据迁移复制任务的最大并发数
- name: wait_leader_batch_count
require: false
type: INT
default: 1024
min_value: 128
max_value: 5000
section: ROOT_SERVICE
need_restart: false
description_en: the batch count of leaders to switch each time the leader coordinator waits.
description_local: RootService发送切主命令的批次大小
- name: trx_force_kill_threshold
require: false
type: TIME
default: 100ms
min_value: 1ms
max_value: 10s
section: TRANS
need_restart: false
description_en: the time given to the transaction to execute when major freeze or switch leader before it will be killed.
description_local: 因冻结或切主需要杀事务时,最长等待时间
- name: trace_log_sampling_interval
require: false
type: TIME
default: 10ms
min_value: 0ms
max_value: NULL
section: OBSERVER
need_restart: false
description_en: the time interval for periodically printing log info in trace log. When force_trace_log is set to FALSE, for each time interval specified by sampling_trace_log_interval, logging info regarding ‘slow query’ and ‘white list’ will be printed out.
description_local: 追踪日志的采样间隔,当force_trace_log关闭的时候生效
- name: proxyro_password
require: false
type: STRING
default: ''
min_value: NULL
max_value: NULL
section: LOAD_BALANCE
need_restart: false
description_en: password of observer proxyro user
description_local: proxyro用户的密码
\ No newline at end of file
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
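# reload plugin entry: compare each server's current and new configuration, collect the
# changed items, and apply them through the given SQL cursor. Items changed on every
# server are applied cluster-wide with ALTER SYSTEM SET; items changed on a subset are
# applied per server. proxyro_password is handled separately via ALTER USER.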
def reload(plugin_context, cursor, new_cluster_config, *args, **kwargs):
stdio = plugin_context.stdio
cluster_config = plugin_context.cluster_config
servers = cluster_config.servers
cluster_server = {}
change_conf = {}
global_change_conf = {}
global_ret = True
for server in servers:
change_conf[server] = {}
stdio.verbose('get %s old configuration' % (server))
config = cluster_config.get_server_conf_with_default(server)
stdio.verbose('get %s new configuration' % (server))
new_config = new_cluster_config.get_server_conf_with_default(server)
stdio.verbose('get %s cluster address' % (server))
cluster_server[server] = '%s:%s' % (server.ip, config['rpc_port'])
stdio.verbose('compare configuration of %s' % (server))
for key in new_config:
if key not in config or config[key] != new_config[key]:
change_conf[server][key] = new_config[key]
if key not in global_change_conf:
global_change_conf[key] = 1
else:
global_change_conf[key] += 1
servers_num = len(servers)
stdio.verbose('apply new configuration')
for key in global_change_conf:
sql = ''
try:
if key == 'proxyro_password':
if global_change_conf[key] != servers_num:
stdio.warn('Invalid: proxyro_password is not a single server configuration item')
continue
value = change_conf[server][key]
sql = 'alter user "proxyro" IDENTIFIED BY "%s"' % value
stdio.verbose('execute sql: %s' % sql)
cursor.execute(sql)
continue
if global_change_conf[key] == servers_num:
sql = 'alter system set %s = %%s' % key
value = change_conf[server][key]
stdio.verbose('execute sql: %s' % (sql % value))
cursor.execute(sql, [value])
cluster_config.update_global_conf(key, value, False)
continue
for server in servers:
if key not in change_conf[server]:
continue
sql = 'alter system set %s = %%s server=%%s' % key
value = change_conf[server][key]
stdio.verbose('execute sql: %s' % (sql % (value, server)))
cursor.execute(sql, [value, server])
cluster_config.update_server_conf(server, key, value, False)
except:
global_ret = False
stdio.exception('execute sql exception: %s' % sql)
cursor.execute('alter system reload server')
cursor.execute('alter system reload zone')
cursor.execute('alter system reload unit')
return plugin_context.return_true() if global_ret else None
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import json
import time
import requests
from copy import deepcopy
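# Build the OCP config-server URLs used by the start plugin: one to query rootservice
# info, one to clean up a previously registered cluster, and one to register this
# cluster (identified by appname and cluster id).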
def config_url(ocp_config_server, appname, cid):
cfg_url = '%s&Action=ObRootServiceInfo&ObCluster=%s' % (ocp_config_server, appname)
proxy_cfg_url = '%s&Action=GetObProxyConfig&ObRegionGroup=%s' % (ocp_config_server, appname)
# Command that clears the URL content for the cluster
cleanup_config_url_content = '%s&Action=DeleteObRootServiceInfoByClusterName&ClusterName=%s' % (ocp_config_server, appname)
# Command that registers the cluster information to the Config URL
register_to_config_url = '%s&Action=ObRootServiceRegister&ObCluster=%s&ObClusterId=%s' % (ocp_config_server, appname, cid)
return cfg_url, cleanup_config_url_content, register_to_config_url
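# Register the cluster on the config server. If registration fails and force_delete is
# set, the stale entry is cleaned up and registration is retried; on success the
# rootservice info URL is returned for use as obconfig_url.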
def init_config_server(ocp_config_server, appname, cid, force_delete, stdio):
def post(url):
stdio.verbose('post %s' % url)
response = requests.post(url)
if response.status_code != 200:
raise Exception('%s status code %s' % (url, response.status_code))
return json.loads(response.text)['Code']
cfg_url, cleanup_config_url_content, register_to_config_url = config_url(ocp_config_server, appname, cid)
ret = post(register_to_config_url)
if ret != 200:
if not force_delete:
raise Exception('%s may have been registered in %s' % (appname, ocp_config_server))
ret = post(cleanup_config_url_content)
if ret != 200:
raise Exception('failed to clean up the config url content, return code %s' % ret)
if post(register_to_config_url) != 200:
return False
return cfg_url
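# start plugin entry: optionally register the cluster on the config server, build the
# observer start command for every server (rootservice list or obconfig_url, plus -o
# options and short flags), launch the processes, and verify them via their pid files.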
def start(plugin_context, local_home_path, repository_dir, *args, **kwargs):
cluster_config = plugin_context.cluster_config
options = plugin_context.options
clients = plugin_context.clients
stdio = plugin_context.stdio
clusters_cmd = {}
need_bootstrap = True
bin_path = os.path.join(repository_dir, 'bin/observer')
root_servers = {}
global_config = cluster_config.get_global_conf()
appname = global_config['appname'] if 'appname' in global_config else None
cluster_id = global_config['cluster_id'] if 'cluster_id' in global_config else None
obconfig_url = global_config['obconfig_url'] if 'obconfig_url' in global_config else None
cfg_url = ''
if obconfig_url:
if not appname or not cluster_id:
stdio.error('need appname and cluster_id')
return
try:
cfg_url = init_config_server(obconfig_url, appname, cluster_id, getattr(options, 'force_delete', False), stdio)
if not cfg_url:
stdio.error('failed to register cluster. %s may have been registered in %s.' % (appname, obconfig_url))
return
except:
stdio.exception('failed to register cluster')
return
stdio.start_loading('Start observer')
for server in cluster_config.servers:
config = cluster_config.get_server_conf(server)
zone = config['zone']
if zone not in root_servers:
root_servers[zone] = '%s:%s:%s' % (server.ip, config['rpc_port'], config['mysql_port'])
rs_list_opt = '-r \'%s\'' % ';'.join([root_servers[zone] for zone in root_servers])
for server in cluster_config.servers:
client = clients[server]
remote_home_path = client.execute_command('echo $HOME/.obd').stdout.strip()
remote_bin_path = bin_path.replace(local_home_path, remote_home_path)
server_config = cluster_config.get_server_conf(server)
req_check = ['home_path', 'cluster_id']
for key in req_check:
if key not in server_config:
stdio.stop_loading('fail')
stdio.print('%s %s is empty', server, key)
return plugin_context.return_false()
home_path = server_config['home_path']
if 'data_dir' not in server_config:
server_config['data_dir'] = '%s/store' % home_path
if client.execute_command('ls %s/clog' % server_config['data_dir']).stdout.strip():
need_bootstrap = False
remote_pid_path = '%s/run/observer.pid' % home_path
remote_pid = client.execute_command('cat %s' % remote_pid_path).stdout.strip()
if remote_pid:
if client.execute_command('ls /proc/%s' % remote_pid):
continue
stdio.verbose('%s start command construction' % server)
not_opt_str = {
'zone': '-z',
'mysql_port': '-p',
'rpc_port': '-P',
'nodaemon': '-N',
'appname': '-n',
'cluster_id': '-c',
'data_dir': '-d',
'devname': '-i',
'syslog_level': '-l',
'ipv6': '-6',
'mode': '-m',
'scn': '-f'
}
get_value = lambda key: "'%s'" % server_config[key] if isinstance(server_config[key], str) else server_config[key]
opt_str = []
for key in server_config:
if key not in ['home_path', 'obconfig_url', 'proxyro_password'] and key not in not_opt_str:
value = get_value(key)
opt_str.append('%s=%s' % (key, value))
cmd = []
if cfg_url:
opt_str.append('obconfig_url=\'%s\'' % cfg_url)
else:
cmd.append(rs_list_opt)
cmd.append('-o %s' % ','.join(opt_str))
for key in not_opt_str:
if key in server_config:
value = get_value(key)
cmd.append('%s %s' % (not_opt_str[key], value))
clusters_cmd[server] = 'cd %s; %s %s' % (home_path, remote_bin_path, ' '.join(cmd))
for server in clusters_cmd:
client = clients[server]
stdio.verbose('starting %s observer', server)
ret = client.execute_command(clusters_cmd[server])
if not ret:
stdio.stop_loading('fail')
stdio.error('failed to start %s observer: %s' % (server, ret.stderr))
return
stdio.stop_loading('succeed')
stdio.start_loading('observer program health check')
time.sleep(3)
failed = []
for server in cluster_config.servers:
client = clients[server]
server_config = cluster_config.get_server_conf(server)
home_path = server_config['home_path']
remote_pid_path = '%s/run/observer.pid' % home_path
stdio.verbose('%s program health check' % server)
remote_pid = client.execute_command('cat %s' % remote_pid_path).stdout.strip()
if remote_pid and client.execute_command('ls /proc/%s' % remote_pid):
stdio.verbose('%s observer[pid: %s] started', server, remote_pid)
else:
failed.append('failed to start %s observer' % server)
if failed:
stdio.stop_loading('fail')
for msg in failed:
stdio.warn(msg)
return plugin_context.return_false()
else:
stdio.stop_loading('succeed')
return plugin_context.return_true(need_bootstrap=need_bootstrap)
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import re
stdio = None
success = True
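# Convert a capacity string such as '8G' or '512M' into bytes; plain integer values are
# returned unchanged.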
def parse_size(size):
_bytes = 0
if not isinstance(size, str) or size.isdigit():
_bytes = int(size)
else:
units = {"B": 1, "K": 1<<10, "M": 1<<20, "G": 1<<30, "T": 1<<40}
match = re.match(r'([1-9][0-9]*)([BKMGT])', size)
_bytes = int(match.group(1)) * units[match.group(2)]
return _bytes
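# Return the inode numbers of sockets listening on 0.0.0.0:<port> according to
# /proc/net/{tcp,udp}, or False if none are found.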
def get_port_socket_inode(client, port):
port = hex(port)[2:].zfill(4).upper()
cmd = "cat /proc/net/{tcp,udp} | awk -F' ' '{print $2,$10}' | grep '00000000:%s' | awk -F' ' '{print $2}' | uniq" % port
res = client.execute_command(cmd)
if not res or not res.stdout.strip():
return False
stdio.verbose(res.stdout)
return res.stdout.strip().split('\n')
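# start_check plugin entry: before starting observers, verify that the configured ports
# are unused, that fs.aio-max-nr and the open-files limit meet the minimums, and that
# the planned memory and disk usage fit on each host.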
def start_check(plugin_context, alert_lv='error', *args, **kwargs):
def alert(*arg, **kwargs):
global success
success = False
alert_f(*arg, **kwargs)
global stdio
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
alert_f = getattr(stdio, alert_lv)
servers_clients = {}
servers_port = {}
servers_memory = {}
servers_disk = {}
for server in cluster_config.servers:
ip = server.ip
client = clients[server]
servers_clients[ip] = client
if ip not in servers_port:
servers_disk[ip] = {}
servers_port[ip] = {}
servers_memory[ip] = {'num': 0, 'percentage': 0}
memory = servers_memory[ip]
ports = servers_port[ip]
disk = servers_disk[ip]
server_config = cluster_config.get_server_conf_with_default(server)
stdio.verbose('%s port check' % server)
for key in ['mysql_port', 'rpc_port']:
port = int(server_config[key])
if port in ports:
alert('%s: %s port is used for %s\'s %s' % (server, port, ports[port]['server'], ports[port]['key']))
continue
ports[port] = {
'server': server,
'key': key
}
if get_port_socket_inode(client, port):
alert('%s:%s port is already used' % (ip, port))
if 'memory_limit' in server_config:
memory['num'] += parse_size(server_config['memory_limit'])
elif 'memory_limit_percentage' in server_config:
memory['percentage'] += int(parse_size(server_config['memory_limit_percentage']))
else:
memory['percentage'] += 80
data_path = server_config['data_dir'] if 'data_dir' in server_config else server_config['home_path']
if data_path not in disk:
disk[data_path] = 0
if 'datafile_disk_percentage' in server_config:
disk[data_path] += int(server_config['datafile_disk_percentage'])
else:
disk[data_path] += 90
for ip in servers_clients:
client = servers_clients[ip]
ret = client.execute_command('cat /proc/sys/fs/aio-max-nr')
if not ret or not ret.stdout.strip().isdigit():
alert('(%s) failed to get fs.aio-max-nr' % ip)
elif int(ret.stdout) < 1048576:
alert('(%s) fs.aio-max-nr must not be less than 1048576 (Current value: %s)' % (ip, ret.stdout.strip()))
ret = client.execute_command('ulimit -n')
if not ret or not ret.stdout.strip().isdigit():
alert('(%s) failed to get open files number' % ip)
elif int(ret.stdout) < 655350:
alert('(%s) open files number must not be less than 655350 (Current value: %s)' % (ip, ret.stdout.strip()))
# memory
if servers_memory[ip]['percentage'] > 100:
alert('(%s) not enough memory' % ip)
else:
ret = client.execute_command("free -b | grep Mem | awk -F' ' '{print $2, $4}'")
if ret:
total_memory, free_memory = ret.stdout.split(' ')
total_memory = int(total_memory)
free_memory = int(free_memory)
total_use = servers_memory[ip]['percentage'] * total_memory / 100 + servers_memory[ip]['num']
if total_use > free_memory:
alert('(%s) not enough memory' % ip)
# disk
disk = {'/': 0}
ret = client.execute_command('df -h')
if ret:
for v, p in re.findall(r'(\d+)%\s+(.+)', ret.stdout):
disk[p] = int(v)
for path in servers_disk[ip]:
kp = '/'
for p in disk:
if p in path:
if len(p) > len(kp):
kp = p
disk[kp] += servers_disk[ip][path]
if disk[kp] > 100:
alert('(%s) %s not enough disk space' % (ip, kp))
if success:
plugin_context.return_true()
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
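# status plugin entry: a server is considered running when its observer.pid file exists
# and the recorded pid is still present under /proc.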
def status(plugin_context, *args, **kwargs):
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
cluster_status = {}
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
client = clients[server]
cluster_status[server] = 0
if 'home_path' not in server_config:
stdio.print('%s home_path is empty', server)
continue
remote_pid_path = '%s/run/observer.pid' % server_config['home_path']
remote_pid = client.execute_command('cat %s' % remote_pid_path).stdout.strip()
if remote_pid and client.execute_command('ls /proc/%s' % remote_pid):
cluster_status[server] = 1
return plugin_context.return_true(cluster_status=cluster_status)
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import json
import time
import requests
def config_url(ocp_config_server, appname, cid):
cfg_url = '%s&Action=ObRootServiceInfo&ObCluster=%s' % (ocp_config_server, appname)
proxy_cfg_url = '%s&Action=GetObProxyConfig&ObRegionGroup=%s' % (ocp_config_server, appname)
# Command that clears the config URL content for the cluster
cleanup_config_url_content = '%s&Action=DeleteObRootServiceInfoByClusterName&ClusterName=%s' % (ocp_config_server, appname)
# Command that registers the cluster information to the Config URL
register_to_config_url = '%s&Action=ObRootServiceRegister&ObCluster=%s&ObClusterId=%s' % (ocp_config_server, appname, cid)
return cfg_url, cleanup_config_url_content, register_to_config_url
def get_port_socket_inode(client, port):
port = hex(port)[2:].zfill(4).upper()
cmd = "cat /proc/net/{tcp,udp} | awk -F' ' '{print $2,$10}' | grep '00000000:%s' | awk -F' ' '{print $2}' | uniq" % port
res = client.execute_command(cmd)
if not res or not res.stdout.strip():
return False
return res.stdout.strip().split('\n')
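# Check whether the given pid still holds a socket listening on the given port by
# matching the socket inodes under /proc/<pid>/fd.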
def confirm_port(client, pid, port):
socket_inodes = get_port_socket_inode(client, port)
if not socket_inodes:
return False
ret = client.execute_command("ls -l /proc/%s/fd/ |grep -E 'socket:\[(%s)\]'" % (pid, '|'.join(socket_inodes)))
if ret and ret.stdout.strip():
return True
return False
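# stop plugin entry: deregister the cluster from the config server if one is configured,
# kill each observer process group recorded in its pid file, and then wait for the
# mysql and rpc ports to be released.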
def stop(plugin_context, *args, **kwargs):
cluster_config = plugin_context.cluster_config
clients = plugin_context.clients
stdio = plugin_context.stdio
global_config = cluster_config.get_global_conf()
appname = global_config['appname'] if 'appname' in global_config else None
cluster_id = global_config['cluster_id'] if 'cluster_id' in global_config else None
obconfig_url = global_config['obconfig_url'] if 'obconfig_url' in global_config else None
stdio.start_loading('Stop observer')
if obconfig_url and appname and cluster_id:
try:
cfg_url, cleanup_config_url_content, register_to_config_url = config_url(obconfig_url, appname, cluster_id)
stdio.verbose('post %s' % cleanup_config_url_content)
response = requests.post(cleanup_config_url_content)
if response.status_code != 200:
raise Exception('%s status code %s' % (cleanup_config_url_content, response.status_code))
except:
stdio.stop_loading('fail')
stdio.exception('failed to clean up the configuration url content')
return
servers = {}
for server in cluster_config.servers:
server_config = cluster_config.get_server_conf(server)
client = clients[server]
if 'home_path' not in server_config:
stdio.verbose('%s home_path is empty', server)
continue
remote_pid_path = '%s/run/observer.pid' % server_config['home_path']
remote_pid = client.execute_command('cat %s' % remote_pid_path).stdout.strip()
if remote_pid and client.execute_command('ps uax | egrep " %s " | grep -v grep' % remote_pid):
stdio.verbose('%s observer[pid:%s] stopping ...' % (server, remote_pid))
client.execute_command('kill -9 -%s; rm -f %s' % (remote_pid, remote_pid_path))
servers[server] = {
'client': client,
'mysql_port': server_config['mysql_port'],
'rpc_port': server_config['rpc_port'],
'pid': remote_pid
}
else:
stdio.verbose('%s observer is not running ...' % server)
count = 10
check = lambda client, pid, port: confirm_port(client, pid, port) if count < 5 else get_port_socket_inode(client, port)
time.sleep(1)
while count and servers:
tmp_servers = {}
for server in servers:
data = servers[server]
stdio.verbose('%s check whether the port is released' % server)
for key in ['rpc_port', 'mysql_port']:
if data[key] and check(data['client'], data['pid'], data[key]):
tmp_servers[server] = data
break
data[key] = ''
else:
stdio.verbose('%s observer is stopped', server)
servers = tmp_servers
count -= 1
if count and servers:
time.sleep(3)
if servers:
stdio.stop_loading('fail')
for server in servers:
stdio.warn('%s port not released', server)
else:
stdio.stop_loading('succeed')
plugin_context.return_true()
#!/bin/bash
if [ -n "$BASH_VERSION" ]; then
complete -F _obd_complete_func obd
fi
function _obd_complete_func
{
local cur prev cmd obd_cmd cluster_cmd mirror_cmd test_cmd
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
obd_cmd="mirror cluster test"
cluster_cmd="start deploy redeploy restart reload destroy stop edit-config list display"
mirror_cmd="clone create list update"
test_cmd="mysqltest"
if [[ ${cur} == * ]] ; then
case "${prev}" in
obd);&
test);&
cluster);&
mirror)
cmd=$(eval echo \$"${prev}_cmd")
COMPREPLY=( $(compgen -W "${cmd}" -- ${cur}) )
;;
clone);&
-p|--path);&
-c|--config)
filename=${cur##*/}
dirname=${cur%*$filename}
res=`ls -a -p $dirname 2>/dev/null | sed "s#^#$dirname#"`
compopt -o nospace
COMPREPLY=( $(compgen -o filenames -W "${res}" -- ${cur}) )
;;
esac
return 0
fi
}
requests==2.24.0
rpmfile==1.0.8
paramiko==2.7.2
backports.lzma==0.0.14
MySQL-python
ruamel.yaml
subprocess32==3.5.4
prettytable==1.0.1
enum34==1.1.6
progressbar==2.5
halo==0.0.30
rpmfile==1.0.8
paramiko==2.7.2
requests==2.25.1
PyMySQL==1.0.2
ruamel.yaml
subprocess32==3.5.4
prettytable==2.1.0
progressbar==2.5
halo==0.0.31
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import sys
import getpass
import warnings
from copy import deepcopy
from subprocess32 import Popen, PIPE
# The cryptography module imported by paramiko raises unsupported-version warnings under Python 2, so suppress them here
warnings.filterwarnings("ignore")
from paramiko import AuthenticationException, SFTPClient
from paramiko.client import SSHClient, AutoAddPolicy
from paramiko.ssh_exception import NoValidConnectionsError
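# Connection parameters for one remote host; __str__ renders the user@host form used in
# log messages.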
class SshConfig(object):
def __init__(self, host, username='root', password=None, key_filename=None, port=22, timeout=30):
self.host = host
self.username = username
self.password = password
self.key_filename = key_filename
self.port = port
self.timeout = timeout
def __str__(self):
return '%s@%s' % (self.username, self.host)
class SshReturn(object):
def __init__(self, code, stdout, stderr):
self.code = code
self.stdout = stdout
self.stderr = stderr
def __bool__(self):
return self.code == 0
def __nonzero__(self):
return self.__bool__()
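# Run commands and copy files on the local machine via subprocess/cp, mirroring the
# remote-client interface so callers do not need to care whether a server is local.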
class LocalClient(object):
@staticmethod
def execute_command(command, env=None, timeout=None, stdio=None):
stdio and getattr(stdio, 'verbose', print)('local execute: %s ' % command, end='')
try:
p = Popen(command, env=env, shell=True, stdout=PIPE, stderr=PIPE)
output, error = p.communicate(timeout=timeout)
code = p.returncode
output = output.decode(errors='replace')
error = error.decode(errors='replace')
verbose_msg = 'exited code %s' % code
if code:
verbose_msg += ', error output:\n%s' % error
stdio and getattr(stdio, 'verbose', print)(verbose_msg)
except Exception as e:
output = ''
error = str(e)
code = 255
verbose_msg = 'exited code 255, error output:\n%s' % error
stdio and getattr(stdio, 'verbose', print)(verbose_msg)
stdio and getattr(stdio, 'exception', print)('')
return SshReturn(code, output, error)
@staticmethod
def put_file(local_path, remote_path, stdio=None):
if LocalClient.execute_command('cp -f %s %s' % (local_path, remote_path), stdio=stdio):
return True
return False
@staticmethod
def put_dir(local_dir, remote_dir, stdio=None):
if LocalClient.execute_command('cp -fr %s %s' % (local_dir, remote_dir), stdio=stdio):
return True
return False
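# paramiko-based client: lazily opens the SSH/SFTP connection, keeps a per-client
# environment string that is prefixed to every remote command, and falls back to
# LocalClient when the target is the local host and the current user.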
class SshClient(object):
def __init__(self, config, stdio=None):
self.config = config
self.stdio = stdio
self.sftp = None
self.is_connected = False
self.ssh_client = SSHClient()
self.env_str = ''
if self._is_local():
self.env = deepcopy(os.environ.copy())
else:
self.env = {'PATH': '/sbin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:'}
self._update_env()
def _update_env(self):
env = []
for key in self.env:
if self.env[key]:
env.append('export %s=%s$%s;' % (key, self.env[key], key))
self.env_str = ''.join(env)
def add_env(self, key, value, rewrite=False, stdio=None):
stdio = stdio if stdio else self.stdio
if key not in self.env or not self.env[key] or rewrite:
stdio and getattr(stdio, 'verbose', print)('%s@%s set env %s to \'%s\'' % (self.config.username, self.config.host, key, value))
self.env[key] = value
else:
stdio and getattr(stdio, 'verbose', print)('%s@%s append \'%s\' to %s' % (self.config.username, self.config.host, value, key))
self.env[key] += value
self._update_env()
def get_env(self, key):
return self.env[key] if key in self.env else None
def __str__(self):
return '%s@%s:%d' % (self.config.username, self.config.host, self.config.port)
def _is_local(self):
return self.config.host in ['127.0.0.1', 'localhost'] and self.config.username == getpass.getuser()
def _login(self, stdio=None):
if self.is_connected:
return True
stdio = stdio if stdio else self.stdio
try:
self.ssh_client.set_missing_host_key_policy(AutoAddPolicy())
self.ssh_client.connect(
self.config.host,
port=self.config.port,
username=self.config.username,
password=self.config.password,
key_filename=self.config.key_filename,
timeout=self.config.timeout
)
except AuthenticationException:
stdio and getattr(stdio, 'exception', print)('')
stdio and getattr(stdio, 'critical', print)('%s@%s username or password error' % (self.config.username, self.config.host))
return False
except NoValidConnectionsError:
stdio and getattr(stdio, 'exception', print)('')
stdio and getattr(stdio, 'critical', print)('%s@%s connect failed: time out' % (self.config.username, self.config.host))
return False
except Exception as e:
stdio and getattr(stdio, 'exception', print)('')
stdio and getattr(stdio, 'critical', print)('%s@%s connect failed: %s' % (self.config.username, self.config.host, e))
return False
self.is_connected = True
return True
def _open_sftp(self, stdio=None):
if self.sftp:
return True
if self._login(stdio):
SFTPClient.from_transport(self.ssh_client.get_transport())
self.sftp = self.ssh_client.open_sftp()
return True
return False
def connect(self, stdio=None):
if self._is_local():
return True
return self._login(stdio)
def reconnect(self, stdio=None):
self.close(stdio)
return self.connect(stdio)
def close(self, stdio=None):
if self._is_local():
return True
if self.is_connected:
self.ssh_client.close()
if self.sftp:
self.sftp = None
def __del__(self):
self.close()
def execute_command(self, command, stdio=None):
if self._is_local():
return LocalClient.execute_command(command, self.env, self.config.timeout, stdio)
if not self._login(stdio):
return SshReturn(255, '', 'connect failed')
stdio = stdio if stdio else self.stdio
verbose_msg = '%s execute: %s ' % (self.config, command)
stdio and getattr(stdio, 'verbose', print)(verbose_msg, end='')
command = '%s %s;echo -e "\n$?\c"' % (self.env_str, command.strip(';'))
stdin, stdout, stderr = self.ssh_client.exec_command(command)
output = stdout.read().decode(errors='replace')
error = stderr.read().decode(errors='replace')
idx = output.rindex('\n')
code = int(output[idx:])
verbose_msg = 'exited code %s' % code
if code:
verbose_msg += ', error output:\n%s' % error
stdio and getattr(stdio, 'verbose', print)(verbose_msg)
return SshReturn(code, output[:idx], error)
def put_file(self, local_path, remote_path, stdio=None):
stdio = stdio if stdio else self.stdio
if self._is_local():
return LocalClient.put_file(local_path, remote_path, stdio)
if not os.path.isfile(local_path):
stdio and getattr(stdio, 'critical', print)('%s is not file' % local_path)
return False
if not self._open_sftp(stdio):
return False
if self.execute_command('mkdir -p %s' % os.path.split(remote_path)[0], stdio):
return self.sftp.put(local_path, remote_path)
return False
def put_dir(self, local_dir, remote_dir, stdio=None):
stdio = stdio if stdio else self.stdio
if self._is_local():
return LocalClient.put_dir(local_dir, remote_dir, stdio)
if not self._open_sftp(stdio):
return False
if not self.execute_command('mkdir -p %s' % remote_dir, stdio):
return False
failed = []
failed_dirs = []
local_dir_path_len = len(local_dir)
for root, dirs, files in os.walk(local_dir):
for path in failed_dirs:
if root.find(path) == 0:
# The parent directory has already been marked as failed, so this subtree can be skipped
# break here so the for-else block below is not executed
break
else:
for name in files:
local_path = os.path.join(root, name)
remote_path = os.path.join(remote_dir, root[local_dir_path_len:].lstrip('/'), name)
if not self.sftp.put(local_path, remote_path):
failed.append(remote_path)
for name in dirs:
local_path = os.path.join(root, name)
remote_path = os.path.join(remote_dir, root[local_dir_path_len:].lstrip('/'), name)
if not self.execute_command('mkdir -p %s' % remote_path, stdio):
failed_dirs.append(local_path)
failed.append(remote_path)
for path in failed:
stdio and getattr(stdio, 'critical', print)('send %s to %s@%s failed' % (path, self.config.username, self.config.host))
return True
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import bz2
import sys
import stat
import gzip
import shutil
from ruamel.yaml import YAML
if sys.version_info.major == 2:
from backports import lzma
else:
import lzma
_WINDOWS = os.name == 'nt'
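# Reference-counted helpers for temporarily adding directories to sys.path and for
# importing and releasing optional modules.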
class DynamicLoading(object):
class Module(object):
def __init__(self, module):
self.module = module
self.count = 0
LIBS_PATH = {}
MODULES = {}
@staticmethod
def add_lib_path(lib):
if lib not in DynamicLoading.LIBS_PATH:
DynamicLoading.LIBS_PATH[lib] = 0
if DynamicLoading.LIBS_PATH[lib] == 0:
sys.path.insert(0, lib)
DynamicLoading.LIBS_PATH[lib] += 1
@staticmethod
def add_libs_path(libs):
for lib in libs:
DynamicLoading.add_lib_path(lib)
@staticmethod
def remove_lib_path(lib):
if lib not in DynamicLoading.LIBS_PATH:
return
if DynamicLoading.LIBS_PATH[lib] < 1:
return
try:
DynamicLoading.LIBS_PATH[lib] -= 1
if DynamicLoading.LIBS_PATH[lib] == 0:
idx = sys.path.index(lib)
del sys.path[idx]
except:
pass
@staticmethod
def remove_libs_path(libs):
for lib in libs:
DynamicLoading.remove_lib_path(lib)
@staticmethod
def import_module(name, stdio=None):
if name not in DynamicLoading.MODULES:
try:
stdio and getattr(stdio, 'verbose', print)('import %s' % name)
module = __import__(name)
DynamicLoading.MODULES[name] = DynamicLoading.Module(module)
except:
stdio and getattr(stdio, 'exception', print)('import %s failed' % name)
stdio and getattr(stdio, 'verbose', print)('sys.path: %s' % sys.path)
return None
DynamicLoading.MODULES[name].count += 1
stdio and getattr(stdio, 'verbose', print)('add %s ref count to %s' % (name, DynamicLoading.MODULES[name].count))
return DynamicLoading.MODULES[name].module
@staticmethod
def export_module(name, stdio=None):
if name not in DynamicLoading.MODULES:
return
if DynamicLoading.MODULES[name].count < 1:
return
try:
DynamicLoading.MODULES[name].count -= 1
stdio and getattr(stdio, 'verbose', print)('sub %s ref count to %s' % (name, DynamicLoading.MODULES[name].count))
if DynamicLoading.MODULES[name].count == 0:
stdio and getattr(stdio, 'verbose', print)('export %s' % name)
del sys.modules[name]
del DynamicLoading.MODULES[name]
except:
stdio and getattr(stdio, 'exception', print)('export %s failed' % name)
class ConfigUtil(object):
@staticmethod
def get_value_from_dict(conf, key, default=None, transform_func=None):
try:
# Do not replace this with conf.get(key, default); the type conversion below is still needed
value = conf[key]
return transform_func(value) if transform_func else value
except:
return default
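# Directory helpers: recursive copy that re-creates symlinks, mkdir that tolerates
# existing directories, and rm that handles both symlinks and trees.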
class DirectoryUtil(object):
@staticmethod
def copy(src, dst, stdio=None):
if not os.path.isdir(src):
stdio and getattr(stdio, 'error', print)("cannot copy tree '%s': not a directory" % src)
return False
try:
names = os.listdir(src)
except:
stdio and getattr(stdio, 'exception', print)("error listing files in '%s':" % (src))
return False
if not DirectoryUtil.mkdir(dst, stdio):
return False
ret = True
links = []
for n in names:
src_name = os.path.join(src, n)
dst_name = os.path.join(dst, n)
if os.path.islink(src_name):
link_dest = os.readlink(src_name)
links.append((link_dest, dst_name))
elif os.path.isdir(src_name):
ret = DirectoryUtil.copy(src_name, dst_name, stdio) and ret
else:
FileUtil.copy(src_name, dst_name)
for link_dest, dst_name in links:
DirectoryUtil.rm(dst_name, stdio)
os.symlink(link_dest, dst_name)
return ret
@staticmethod
def mkdir(path, mode=0o755, stdio=None):
try:
os.makedirs(path, mode=mode)
return True
except OSError as e:
if e.errno == 17:
return True
elif e.errno == 20:
stdio and getattr(stdio, 'error', print)('%s is not a directory', path)
else:
stdio and getattr(stdio, 'error', print)('failed to create directory %s', path)
stdio and getattr(stdio, 'exception', print)('')
except:
stdio and getattr(stdio, 'exception', print)('')
stdio and getattr(stdio, 'error', print)('failed to create directory %s', path)
return False
@staticmethod
def rm(path, stdio=None):
try:
if os.path.exists(path):
if os.path.islink(path):
os.remove(path)
else:
shutil.rmtree(path)
return True
except Exception as e:
stdio and getattr(stdio, 'exception', print)('')
stdio and getattr(stdio, 'error', print)('failed to remove %s', path)
return False
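# File helpers: buffered copy, open that creates missing parent directories, and unzip
# that returns a readable file object for bz2/xz/gz archives.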
class FileUtil(object):
COPY_BUFSIZE = 1024 * 1024 if _WINDOWS else 64 * 1024
@staticmethod
def copy_fileobj(fsrc, fdst):
fsrc_read = fsrc.read
fdst_write = fdst.write
while True:
buf = fsrc_read(FileUtil.COPY_BUFSIZE)
if not buf:
break
fdst_write(buf)
@staticmethod
def copy(src, dst, stdio=None):
if os.path.exists(src) and os.path.exists(dst) and os.path.samefile(src, dst):
info = "`%s` and `%s` are the same file" % (src, dst)
if stdio:
getattr(stdio, 'error', print)(info)
return False
else:
raise IOError(info)
for fn in [src, dst]:
try:
st = os.stat(fn)
except OSError:
pass
else:
if stat.S_ISFIFO(st.st_mode):
info = "`%s` is a named pipe" % fn
if stdio:
getattr(stdio, 'error', print)(info)
return False
else:
raise IOError(info)
try:
if os.path.islink(src):
os.symlink(os.readlink(src), dst)
return True
with FileUtil.open(src, 'rb') as fsrc:
with FileUtil.open(dst, 'wb') as fdst:
FileUtil.copy_fileobj(fsrc, fdst)
return True
except Exception as e:
if stdio:
getattr(stdio, 'exception', print)('copy error')
else:
raise e
return False
@staticmethod
def open(path, _type='r', stdio=None):
if os.path.exists(path):
if os.path.isfile(path):
return open(path, _type)
info = '%s is not file' % path
if stdio:
getattr(stdio, 'error', print)(info)
return None
else:
raise IOError(info)
dir_path, file_name = os.path.split(path)
if not dir_path or DirectoryUtil.mkdir(dir_path, stdio=stdio):
return open(path, _type)
info = '%s is not file' % path
if stdio:
getattr(stdio, 'error', print)(info)
return None
else:
raise IOError(info)
@staticmethod
def unzip(source, ztype=None, stdio=None):
if not ztype:
ztype = source.split('.')[-1]
try:
if ztype == 'bz2':
s_fn = bz2.BZ2File(source, 'r')
elif ztype == 'xz':
s_fn = lzma.LZMAFile(source, 'r')
elif ztype == 'gz':
s_fn = gzip.GzipFile(source, 'r')
else:
s_fn = open(source, 'r')
return s_fn
except:
stdio and getattr(stdio, 'exception', print)('failed to unzip %s' % source)
return None
@staticmethod
def rm(path, stdio=None):
if not os.path.exists(path):
return True
try:
os.remove(path)
return True
except:
stdio and getattr(stdio, 'exception', print)('failed to remove %s' % path)
return False
@staticmethod
def move(src, dst, stdio=None):
return shutil.move(src, dst)
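# Thin wrapper around ruamel.yaml's YAML class that reports load/dump errors through
# the deploy stdio object before re-raising them.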
class YamlLoader(YAML):
def __init__(self, stdio=None, typ=None, pure=False, output=None, plug_ins=None):
super(YamlLoader, self).__init__(typ=typ, pure=pure, output=output, plug_ins=plug_ins)
self.stdio = stdio
def load(self, stream):
try:
return super(YamlLoader, self).load(stream)
except Exception as e:
if getattr(self.stdio, 'exception', False):
self.stdio.exception('Parsing error:\n%s' % e)
raise e
def dump(self, data, stream=None, transform=None):
try:
return super(YamlLoader, self).dump(data, stream=stream, transform=transform)
except Exception as e:
if getattr(self.stdio, 'exception', False):
self.stdio.exception('dump error:\n%s' % e)
raise e