1. 02 Nov 2020 (1 commit)
  2. 31 Oct 2020 (2 commits)
  3. 30 Oct 2020 (6 commits)
    • Fix source greenplum_path.sh error with set -u (#11085) · 1f429744
      Authored by Chen Mulong
      The error was introduced by dc96f667.
      If `set -u` was called before sourcing greenplum_path.sh with bash, an
      error `ZSH_VERSION: unbound variable` was reported.
      To fix the issue, use the shell parameter expansion `${VAR:-}`, which
      expands to an empty value when the variable doesn't exist.

      Tested with zsh, bash and dash.
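      The `${VAR:-}` idiom can be sketched as follows (the guarded variable is
      the one from the commit; the surrounding script is illustrative):

      ```shell
      #!/bin/sh
      set -u  # treat references to unset variables as errors

      # Unsafe under set -u: [ -n "$ZSH_VERSION" ] aborts when the variable
      # is unset. Safe: ${ZSH_VERSION:-} expands to "" instead of failing.
      if [ -n "${ZSH_VERSION:-}" ]; then
          echo "running under zsh"
      else
          echo "not zsh"
      fi
      ```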
    • Reset wrote_xlog in pg_conn to avoid keeping the old value. (#11077) · 777b51cd
      Authored by (Jerome)Junfeng Yang
      On the QD, we track in the libpq connection whether a QE wrote xlog.

      The logic is: if a QE writes xlog, it sends a libpq message to the QD.
      But that message is sent in ReadyForQuery, so before the QE executes
      that function, it may already have sent results back to the QD. When
      the QD then processes those results, it has not yet read the new
      wrote_xlog value, so the connection still carries the wrote_xlog value
      from the previous dispatch, which affects whether one-phase commit is
      chosen.

      The issue only occurs when the QE flushes the libpq message before
      ReadyForQuery, so it is hard to construct a test case that covers it.
      I found the issue while experimenting with sending extra information
      from QE to QD, which broke the gangsize test that shows commit info.
    • Make greenplum-path.sh compatible with more shells (#11043) · dc96f667
      Authored by Chen Mulong
      The generated greenplum_path.sh env file previously contained
      bash-specific syntax, so it errored out if the user's shell was zsh.

      zsh doesn't have BASH_SOURCE; "${(%):-%x}" is the closest zsh
      equivalent.
      Also try to support other shells with some command combinations.
      Tested with bash/zsh/dash.
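      A sketch of the shell-detection approach (the branch structure is
      illustrative, not the exact greenplum_path.sh code):

      ```shell
      # Resolve the path of the script being sourced, portably.
      if [ -n "${BASH_SOURCE:-}" ]; then
          SCRIPT_PATH="${BASH_SOURCE[0]}"   # bash
      elif [ -n "${ZSH_VERSION:-}" ]; then
          SCRIPT_PATH="${(%):-%x}"          # zsh: %x is the sourced file name
      else
          SCRIPT_PATH="$0"                  # best effort for other shells
      fi
      echo "$SCRIPT_PATH"
      ```

      Each shell-specific expansion sits behind a guard, so a shell that
      doesn't support the syntax never attempts to expand it.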
    • Docs - update interconnect proxy discussion to cover hostname support (#11027) · d7bfe6ee
      Authored by David Yozie
      * Docs - update interconnect proxy discussion to cover hostname support

      * Change gp_interconnect_type -> gp_interconnect_proxy_addresses in note
    • docs - update some troubleshooting info (#11064) · 151fa706
      Authored by Lisa Owen
    • gpinitsystem -I should respect master dbid != 1 · 00ae3013
      Authored by dh-cloud
      Looking at the GP documents, there is no indication that the master
      dbid must be 1. However, during CREATE_QD_DB, gpinitsystem always
      writes "gp_dbid=1" into the file `internal.auto.conf`, even if we
      specify:

      ```
      mdw~5432~/data/master/gpseg-1~2~-1
       OR
      mdw~5432~/data/master/gpseg-1~0~-1
      ```

      Meanwhile the catalog gp_segment_configuration can hold the correct
      master dbid value (2 or 0); the mismatch causes gpinitsystem to hang.
      Users can run into this problem the first time they use
      gpinitsystem -I.

      Here we test dbid 0 because PostmasterMain() simply checks
      dbid >= 0 (in non-utility mode), saying:

      > This value must be >= 0, or >= -1 in utility mode

      so 0 appears to be a valid value.

      Changes:

      - use the specified master dbid field during CREATE_QD_DB.
      - remove the unused macros MASTER_DBID and InvalidDbid from C sources.
      Reviewed-by: Ashwin Agrawal <aashwin@vmware.com>
  4. 29 Oct 2020 (2 commits)
    • docs - add info about postgres_fdw module (#11075) · 6693192c
      Authored by Lisa Owen
    • Skip fts probe for fts process · 3cf72f6c
      Authored by dh-cloud
      If cdbcomponent_getCdbComponents() caught an error thrown by
      getCdbComponents, FtsNotifyProber would be called. But if that
      happened inside the fts process, the fts process would hang.

      Skip the fts probe for the fts process; after that, under the same
      situation, the fts process exits and is then restarted by the
      postmaster.
  5. 28 Oct 2020 (6 commits)
    • Collect pgstat from QE to enable auto-ANALYZE on partition leaf tables. (#10988) · 259cb9e7
      Authored by (Jerome)Junfeng Yang
      Collect tuple-related pgstat table info from segments, so that
      auto-ANALYZE can now consider partition tables. Previously we did not
      have accurate pgstat for partition leaf tables: this kind of info is
      counted through the access method on segments, and we used to derive
      it from the estate es_processed count on the QD. So when inserting
      into the root partition table, we could not tell how many tuples went
      into each leaf, and autovacuum never triggered auto-ANALYZE for leaf
      tables.

      The idea is: for a writer QE, report the current nest level's xact
      table pgstats to the QD through libpq at the end of a query
      statement. A single statement does not touch many tables, so the
      overhead is really small.
      On the QD, retrieve and combine these tables' stats from the dispatch
      result and add them to the current nest level's xact pgstats.
      The old pgstat collection code on the QD can now be removed.

      The pgstat for a table can be viewed by querying
      `pg_stat_all_tables_internal`. Except for the scan-related counters,
      the other counters should now be accurate. On master, the
      scan-related counters of a table's pgstat are not yet gathered from
      segments; that requires extra work. The current implementation is
      already enough to support auto-ANALYZE on partition tables.
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
    • mask all signals in the udp pthreads · 54451fc0
      Authored by 盏一
      In some cases, signals (like SIGQUIT) that should only be processed
      by the main thread of the postmaster may be dispatched to rxThread.
      So we should, and it is safe to, block all signals in the udp
      pthreads.

      Fix #11006
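      The standard way to block every signal in a worker thread is
      pthread_sigmask with a filled signal set. A minimal sketch of the
      technique (the thread function name is illustrative, not GPDB's code):

      ```c
      #include <pthread.h>
      #include <signal.h>
      #include <stdio.h>

      /* Worker thread: block all signals so they are delivered to the
       * main thread instead of this one. */
      static void *rx_thread_main(void *arg)
      {
          sigset_t mask;
          sigfillset(&mask);                       /* every signal */
          pthread_sigmask(SIG_BLOCK, &mask, NULL); /* block them here only */
          /* ... the receive loop would run here ... */
          return NULL;
      }

      int main(void)
      {
          pthread_t t;
          pthread_create(&t, NULL, rx_thread_main, NULL);
          pthread_join(t, NULL);
          puts("done");
          return 0;
      }
      ```

      The mask is per-thread, so the main thread's signal handling (e.g.
      the postmaster's SIGQUIT handler) is unaffected.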
    • docs - remove duplicate left nav entry (#11040) · 83afc602
      Authored by Lisa Owen
    • e08cedf8
    • Pin PR resource to v0.21 to avoid GitHub abuse rate limit · f4bf9be6
      Authored by Jesse Zhang
      We started hitting this on Thursday, and there have been ongoing
      reports from the community about it as well. While upstream figures
      out a long-term solution [1], we've been advised [2] to pin to the
      previous release (v0.21.0) to avoid being blocked for hours at a
      time.

      [1]: https://github.com/telia-oss/github-pr-resource/pull/238
      [2]: https://github.com/telia-oss/github-pr-resource/pull/238#issuecomment-714830491
    • Validate cluster state during regression tests · 937187e9
      Authored by Ashwin Agrawal
      It is often observed in CI that a test that leaves the cluster in an
      inconsistent state (e.g. a primary-mirror pair is not in sync, or a
      primary has not finished crash recovery) causes several subsequent
      tests in the schedule to fail.  Worse, the culprit test itself may be
      reported as passed, because its validation criteria did not include
      the state of the cluster.  This has been found to mislead debugging
      efforts and ultimately waste time.

      To make debugging CI failures more efficient, this patch enhances
      pg_regress to perform the validation internally. If the validation
      fails, further testing is aborted.

      The validation is performed before running each group of tests
      specified on one line in the schedule file. Validation is also
      performed before every single test when running in serialized
      fashion.

      When the cluster validation fails, the culprit is in the previously
      run test group.

      This patch is built on the groundwork and analysis laid out in
      PR #9865 and PR #10825 by Wu Hao and Asim R P.
      Reviewed-by: Asim R P <pasim@vmware.com>
  6. 27 Oct 2020 (6 commits)
    • EXCLUDE in window functions works now; remove 'gp_ignore_window_exclude'. · 8299a524
      Authored by Heikki Linnakangas
      Previously, GPDB did not support the SQL "EXCLUDE [CURRENT ROW | GROUP |
      TIES]" syntax in window functions. We got support for it from upstream
      with the PostgreSQL v12 merge, which left the GUC obsolete and unused.

      Update the 'olap_window' test accordingly. NOTE: the 'olap_window' test
      isn't currently run as part of the regression suite! I don't know why
      it has been neglected like that, but that's not this patch's fault.
      The upstream 'window' test has queries with the EXCLUDE clause, so the
      feature is covered.

      Reviewed-by: Jimmy Yih
    • Add query info hook for CTAS query type. (#11050) · c8d84436
      Authored by Jialun
      GPCC wants to hook queries like
      - create table ... as select ...
      - create materialized view ... as select ...
    • Fix a shell issue with an empty string · 5db43663
      Authored by Adam Lee
      If $(UBUNTU_PLATFORM) is an empty string, the test command fails;
      double-quote it to fix:

      ```
      --- mock for platform
      /bin/sh: line 0: [: =: unary operator expected
      ```
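      The failure mode can be reproduced in isolation (the variable name and
      comparison value here are illustrative):

      ```shell
      VAR=""

      # Unquoted, [ $VAR = ubuntu ] expands to [ = ubuntu ], which is a
      # syntax error: "[: =: unary operator expected".
      # Quoted, the empty string stays a single (empty) operand:
      if [ "$VAR" = "ubuntu" ]; then
          echo "ubuntu"
      else
          echo "not ubuntu"
      fi
      ```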
    • Remove Orca assertions when merging buckets · 34ae3d94
      Authored by Chris Hajas
      These assertions started getting tripped in the previous commit when
      adding tests, but they aren't related to the Epsilon change. Rather,
      we calculate the frequency of a singleton bucket using two different
      methods, which causes the assertion to break down. The first method
      (calculating the upper_third) assumes the singleton has 1 NDV and
      that there is an even distribution across the NDVs. The second (in
      GetOverlapPercentage) calculates a "resolution" based on Epsilon and
      assumes the bucket contains some small Epsilon frequency. This makes
      the overlap percentage too high; it too should likely be based on the
      NDV.

      In practice, this won't have much impact unless the NDV is very
      small. Additionally, the conditional logic is based on the bounds,
      not the frequency. Still, it would be good to align the two methods
      in the future so that our statistics calculations are simpler to
      understand and more predictable.

      For now, we remove the assertions and add a TODO. Once we align the
      methods, we should add the assertions back.
    • Fix stats bucket logic for Double values in UNION queries in Orca · ba4deed0
      Authored by Chris Hajas
      When merging statistics buckets for UNION and UNION ALL queries
      involving a column that maps to Double (e.g. floats, numerics, and
      time-related types), we could end up in an infinite loop. This
      occurred when the bucket boundaries being compared were within a very
      small value, defined in Orca as Epsilon. While we considered two
      values equal if they were within Epsilon, we didn't apply Epsilon
      when computing whether datum1 < datum2. Therefore we could get into a
      situation where a datum was both equal to and less than another
      datum, which the logic couldn't handle.

      The fix is to enforce a hard boundary on when we consider one datum
      less than another, by including the Epsilon logic in all datum
      comparisons. Now, two datums are equal if they are within Epsilon,
      but datum1 is less than datum2 only if datum1 < datum2 - Epsilon.

      Also add some tests, since we didn't have any for types that map to
      Double.
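      The comparison rule described above can be sketched as follows (the
      epsilon value and function names are illustrative, not Orca's actual
      API):

      ```cpp
      #include <cassert>
      #include <cmath>

      // Illustrative tolerance; Orca defines its own Epsilon constant.
      constexpr double kEpsilon = 1e-10;

      // Two datums are equal if they are within epsilon of each other.
      bool DatumEquals(double d1, double d2)
      {
          return std::fabs(d1 - d2) <= kEpsilon;
      }

      // d1 is strictly less than d2 only if d1 < d2 - epsilon, so a pair
      // of datums can never be both "equal" and "less than" at once.
      bool DatumLess(double d1, double d2)
      {
          return d1 < d2 - kEpsilon;
      }

      int main()
      {
          double a = 1.0, b = 1.0 + 1e-12;   // within epsilon
          assert(DatumEquals(a, b));
          assert(!DatumLess(a, b));          // no contradictory ordering
          assert(DatumLess(1.0, 2.0));
          return 0;
      }
      ```

      Making "less than" exclude the epsilon band is what breaks the
      infinite loop: the merge logic always sees a consistent ordering.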
    • Make 'rows' estimate more accurate for plans that fetch only a few rows. · f4d48358
      Authored by Heikki Linnakangas
      In commit c5f6dbbe, we changed the row and cost estimates on plan nodes
      to represent per-segment costs. That made some estimates worse, because
      the effects of the estimate "clamping" compound. Per my comment on the
      PR back then:

      > One interesting effect of this change, that explains many of the
      > plan changes: If you have a table with very few rows, or e.g. a qual
      > like id = 123 that matches exactly one row, the Seq/Index Scan on it
      > will be marked with rows=1. It now means that we estimate that every
      > segment returns one row, although in reality, only one of them will
      > return a row, and the rest will return nothing. That's because the
      > row count estimates are "clamped" in the planner to at least
      > 1. That's not a big deal on its own, but if you then have e.g. a
      > Gather Motion on top of the Scan, the planner will estimate that the
      > Gather Motion returns as many rows as there are segments. If you
      > have e.g. 100 segments, that's relatively a big discrepancy, with
      > 100 rows vs 1. I don't think that's a big problem in practice, I
      > don't think most plans are very sensitive to that kind of a
      > misestimate. What do you think?
      >
      > If we wanted to fix that, perhaps we should stop "clamping" the
      > estimates to 1. I don't think there's any fundamental reason we need
      > to do it. Perhaps clamp down to 1 / numsegments instead.

      But I came up with a less intrusive idea, implemented in this commit:
      most Motion nodes have a "parent" RelOptInfo, and the RelOptInfo
      contains an estimate of the total number of rows, before dividing by
      the number of segments or clamping. So if the row estimate we get
      from the subpath seems clamped to 1.0, we look at the row estimate on
      the underlying RelOptInfo instead, and use that if it's smaller. That
      makes the row count estimates better for plans that fetch a single
      row or a few rows, the same as they were before commit c5f6dbbe. Not
      all RelOptInfos have a row count estimate, and the subpath's estimate
      is more accurate when the number of rows produced by the path differs
      from the number of rows in the underlying relation, e.g. because of a
      ProjectSet node, so we still prefer the subpath's estimate if it
      doesn't seem clamped.
      Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
  7. 26 Oct 2020 (3 commits)
  8. 24 Oct 2020 (1 commit)
    • gpstart: test improved handling of down segment hosts · e39465ae
      Authored by David Krieger
      The tests in commit be5d11e2 contained a typo that caused the changes
      in the scenario "gpstart starts even if the standby host is
      unreachable" to not properly clean up after themselves.  Though the
      test feature still passes, this leaves a bug to be found later when
      more tests are added.
  9. 23 Oct 2020 (13 commits)
    • Relfrozenxid must be invalid for append-optimized tables · e68d5b8a
      Authored by Asim R P
      Append-optimized tables do not contain transaction information in
      their tuples, so pg_class.relfrozenxid must remain invalid.  This is
      done correctly during table creation; however, when a table was
      rewritten, the relfrozenxid was accidentally set.  Fix it such that
      the diff with upstream is minimised.  In particular, the function
      "should_have_valid_relfozenxid" is removed.

      The FIXME comments that led me to this bug are also removed.

      Reviewed by: Ashwin Agrawal
    • Fix CLOSE_WAIT leaks when recycling a Gang · 990454e8
      Authored by dh-cloud
      The PostgreSQL libpq documentation says:

      > Note that when PQconnectStart or PQconnectStartParams returns a
      > non-null pointer, you must call PQfinish when you are finished
      > with it, in order to dispose of the structure and any associated
      > memory blocks. **This must be done even if the connection attempt
      > fails or is abandoned**.

      However, the cdbconn_disconnect() function did not call PQfinish when
      the connection was in the CONNECTION_BAD state, which can leak
      sockets (leaving them in the CLOSE_WAIT state).
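      The rule from the libpq docs in practice, as a minimal sketch (the
      connection string is illustrative):

      ```c
      #include <stdio.h>
      #include <libpq-fe.h>

      int main(void)
      {
          /* Illustrative conninfo; this host is not expected to resolve. */
          PGconn *conn = PQconnectdb("host=nonexistent.invalid dbname=postgres");

          if (PQstatus(conn) == CONNECTION_BAD)
              fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

          /* PQfinish must be called even though the attempt failed;
           * otherwise the socket and the PGconn structure leak. */
          PQfinish(conn);
          return 0;
      }
      ```

      Skipping PQfinish on the failure path is exactly the bug fixed in
      cdbconn_disconnect(): the kernel keeps the half-closed socket in
      CLOSE_WAIT until the process exits.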
    • gprecoverseg: log the error if pg_rewind fails · 57756cc0
      Authored by Adam Lee
      Previously, the error message was not logged when pg_rewind failed;
      fix that to make DBA/field/developer's lives easier.

      Before this:
      ```
      20201022:15:19:10:011118 gprecoverseg:earth:adam-[INFO]:-Running pg_rewind on required mirrors
      20201022:15:19:12:011118 gprecoverseg:earth:adam-[WARNING]:-Incremental recovery failed for dbid 2. You must use gprecoverseg -F to recover the segment.
      20201022:15:19:12:011118 gprecoverseg:earth:adam-[INFO]:-Starting mirrors
      20201022:15:19:12:011118 gprecoverseg:earth:adam-[INFO]:-era is 0406b847bf226356_201022151031
      ```

      After this:
      ```
      20201022:15:33:31:019577 gprecoverseg:earth:adam-[INFO]:-Running pg_rewind on required mirrors
      20201022:15:33:31:019577 gprecoverseg:earth:adam-[WARNING]:-pg_rewind: fatal: could not find common ancestor of the source and target cluster's timelines
      20201022:15:33:31:019577 gprecoverseg:earth:adam-[WARNING]:-Incremental recovery failed for dbid 2. You must use gprecoverseg -F to recover the segment.
      20201022:15:33:31:019577 gprecoverseg:earth:adam-[INFO]:-Starting mirrors
      20201022:15:33:31:019577 gprecoverseg:earth:adam-[INFO]:-era is 0406b847bf226356_201022151031
      ```
    • 9a4c1c0b
    • Decorate virtual functions with "override" · aedfd1f5
      Authored by Jesse Zhang
      This conforms our code to the best practice that each polymorphic
      function should have exactly one of the "virtual", "override", and
      "final" specifiers. I made this change using the following invocation:

      clang-tidy -checks '-*,modernize-use-override'

      Once we've made this change, ideally the practice can be enforced in
      CI. We can just run clang-tidy with the single "modernize-use-override"
      check to start with. Or we can see if our compilers are helpful enough:
      fortunately, Clang already issues warnings (turned into errors by
      -Werror) when we use "override" inconsistently, and GCC has something
      similar (-Wsuggest-override) in versions 9.2 and later.
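      The practice in question, sketched on a hypothetical class hierarchy
      (not from the GPDB sources):

      ```cpp
      #include <iostream>

      class Base {
      public:
          virtual ~Base() = default;
          virtual int Cost() const { return 1; }   // "virtual" introduces it
      };

      class Derived : public Base {
      public:
          // Exactly one specifier: "override" (not a repeated "virtual"),
          // so the compiler rejects a signature that no longer overrides.
          int Cost() const override { return 2; }
      };

      int main()
      {
          Derived d;
          Base &b = d;
          std::cout << b.Cost() << '\n';  // dynamic dispatch selects Derived
          return 0;
      }
      ```

      With -Wsuggest-override (GCC 9.2+) or Clang's inconsistent-override
      warnings, omitting "override" on Derived::Cost becomes a diagnostic.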
    • Publicize deleted member functions · 76b1b6eb
      Authored by Jesse Zhang
      Now that we are explicitly declaring copy-assignment operators and
      copy constructors as deleted, we should also make them public:
      private and "= delete" don't make much sense in combination, and
      public placement gives the best diagnostics in practice [1].

      In the previous commit, where private unimplemented special member
      functions were changed to be declared as deleted, we left the new
      declarations private. This follow-up moves those deleted functions
      to the public section of their classes.

      I made this change guided by tooling: even though it doesn't offer a
      fix, clang-tidy is useful enough to emit diagnostics that look like
      the following:

      /home/pivotal/workspace/gpdb/src/backend/gporca/libgpopt/include/gpopt/base/CColRef.h:76:2: warning: deleted member function should be public [modernize-use-equals-delete]
              CColRef(const CColRef &) = delete;
              ^

      The process of making this change is a tale of much sorrow, blood
      and tears. Suffice it to say that 20 different regular expressions
      were involved, and I'm edging closer to PTSD when looking at
      backslashes.

      References:
      [1] https://abseil.io/tips/143#summary
    • Modernize: use equals-default and equals-delete · 9f2e9344
      Authored by Jesse Zhang
      This commit replaces two pre-C++11 practices with their modern, more
      intent-expressive equivalents: "disallowed" copy constructors and
      assignments, and pairs of empty braces for special member function
      bodies (destructors, default constructors, etc.).

      "Disallowed" copy constructors / assignment
      -------------------------------------------
      Old: private, unimplemented copy constructors and copy-assignment
      operators in classes, usually paired with a semi-descriptive
      comment.

      New: publicly declare them as "= delete".

      Fun fact: we had 13 spellings in comments on disallowed copy
      constructors:

      01. disable copy ctor
      02. hidden copy ctor
      03. inaccessible copy ctor
      04. no copy ctor
      05. no default copy ctor
      06. private copy ctor
      07. private no copy ctor
      08. disabled copy constructor
      09. no copy constructor
      10. private copy constructor
      11. copy c'tor - not defined
      12. no copy c'tor
      13. private copy c'tor

      This commit removes all of them, because the new code
      ("T(const T &) = delete") already clearly expresses the intent of
      disallowing copy and assignment.

      To keep the history clear, this commit leaves the declarations of
      "prohibited" functions private. A forthcoming commit will wholesale
      change them to public.

      Defaulted special member functions
      ----------------------------------
      Old:
      struct A {
        A() {}
        ~A();
      };
      A::~A() {}

      New:
      struct A {
        A() = default;
        ~A();
      };
      A::~A() = default;

      Replacing empty braces with "= default" not only makes the code
      clearer, it also enables more compiler optimizations, as e.g. some
      defaulted functions can be recognized as trivial.

      Most of this commit was produced by running clang-tidy with an
      invocation like the following (plus some CMake and shell tricks):

      clang-tidy-12 -header-filter 'gpdbcost|gpopt|gpos|naucrates' -checks '-*,modernize-use-equals-delete,modernize-use-equals-default'

      The tool uses a slightly conservative heuristic to detect a large
      portion of the two outdated patterns above and rewrite them to use
      "= delete" and "= default". Making the "= delete" functions public
      is, sadly, a FIXME item, so we'll have to do it by hand (in a
      forthcoming commit).
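      The two modernizations, side by side on a hypothetical class (the
      class name is illustrative, not from the GPDB sources):

      ```cpp
      #include <type_traits>

      // Non-copyable class, modern spelling. The deleted members are
      // public, so misuse yields a clear "call to deleted function"
      // diagnostic rather than an access error.
      class CJob {
      public:
          CJob() = default;    // was: CJob() {}
          ~CJob() = default;   // was: ~CJob() {}

          CJob(const CJob &) = delete;             // was: private, unimplemented
          CJob &operator=(const CJob &) = delete;  // was: private, unimplemented
      };

      int main()
      {
          static_assert(!std::is_copy_constructible<CJob>::value,
                        "copying is disallowed");
          static_assert(std::is_trivially_destructible<CJob>::value,
                        "a defaulted destructor can be trivial");
          CJob j;   // default construction still works
          (void)j;
          return 0;
      }
      ```

      The second static_assert shows the optimization point from the commit
      message: "= default" lets the compiler treat the destructor as
      trivial, which an empty-brace body does not.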
    • Avoid ".." in include paths to accommodate tooling. · d5bf6eae
      Authored by Jesse Zhang
      The current setup in CMake (and the Makefiles too, but that's an even
      harder problem) leads to equivalent-but-not-identical header include
      paths (-I) for the same directory, e.g. -Ilibgpopt/include vs.
      -Ilibgpdbcost/../libgpopt/include.

      This confuses Clang-based tooling into identifying multiple paths for
      the same header (the difference being an extra sibling directory
      followed by dot-dot, a "niece" directory followed by two levels of
      dot-dot, and so forth) as different headers. That, in turn, undermines
      the conflict resolution and edit deduplication features in Clang's
      refactoring engine, leading to duplicate edits when applying FixIts.

      This commit applies some fairly simple fixes to spell the sibling
      directories in a way that generates consistent include paths, so that
      Clang tooling is more functional. The Makefiles are left unchanged, as
      they are a lot more difficult to make "right". One could argue that we
      _might_ want to instead transform the intermediate representation of
      Clang's "replacement" YAML files, but that's left for another day.
    • Remove unused private fields · fcbbe77e
      Authored by Jesse Zhang
      During the development of a forthcoming commit that replaces private
      unimplemented special member functions with explicitly deleted ones,
      we got a surprising improvement in compiler diagnostics: Clang
      started to see a lot of unused private fields that had been masked by
      the presence of unimplemented functions in the class. This makes
      sense, because an ostensibly unused field _could be_ used by a method
      whose definition the compiler hasn't seen in the current translation
      unit -- it also suggests that the compiler would be better equipped
      to detect this if we used whole-program analysis.

      A sample of the errors looks like the following:

      In file included from CMiniDumper.cpp:14:
      In file included from ../../../../../../src/backend/gporca/libgpos/include/gpos/error/CErrorContext.h:17:
      ../../../../../../src/backend/gporca/libgpos/include/gpos/error/CMiniDumper.h:32:15: error: private field 'm_mp' is not used [-Werror,-Wunused-private-field]
              CMemoryPool *m_mp;
                           ^
      1 error generated.

      There are 12 such warnings, and this commit fixes them all. Note that
      code mortality is a chain reaction: oftentimes a variable (including
      a parameter) is only live because it's passed to another variable.
      While I was at it, I also performed chained removal of variables and
      parameters that became dead after removing the dead fields.
    • Remove debug-only private fields in release builds · 28d15c6e
      Authored by Jesse Zhang
      We have a bunch of classes with private fields that are used only in
      debug builds; some of them are probably opportunities for complete
      removal. This is exposed by an upcoming commit that enforces the use
      of "= delete" and "= default" throughout the codebase.

      I attempted to solve this by adding the GPOS_ASSERTS_ONLY attribute,
      but GCC doesn't like that, throwing errors like the following:

      In file included from ../src/backend/gporca/libgpos/include/gpos/common/clibwrapper.h:22,
                       from ../src/backend/gporca/libgpos/include/gpos/error/CMessage.h:21,
                       from ../src/backend/gporca/libgpos/include/gpos/error/CMessageTable.h:14,
                       from ../src/backend/gporca/libgpos/include/gpos/error/CMessageRepository.h:14,
                       from ../src/backend/gporca/libgpos/src/_api.cpp:15:
      ../src/backend/gporca/libgpos/include/gpos/attributes.h:15:43: error: 'unused' attribute ignored [-Werror=attributes]
         15 | #define GPOS_UNUSED __attribute__((unused))
            |                                           ^
      ../src/backend/gporca/libgpos/include/gpos/attributes.h:21:27: note: in expansion of macro 'GPOS_UNUSED'
         21 | #define GPOS_ASSERTS_ONLY GPOS_UNUSED
            |                           ^~~~~~~~~~~
      ../src/backend/gporca/libgpos/include/gpos/memory/CAutoMemoryPool.h:56:31: note: in expansion of macro 'GPOS_ASSERTS_ONLY'
         56 |  ELeakCheck m_leak_check_type GPOS_ASSERTS_ONLY;
            |                               ^~~~~~~~~~~~~~~~~
      cc1plus: all warnings being treated as errors

      So we're back to the good ol' #ifdef GPOS_DEBUG.

      This is the first of a pair of manual changes. In the immediately
      following commit I'll remove the remaining (unconditionally) unused
      private fields.
    • Add --without-python to resource group run_tests · aee5f16d
      Authored by Ashwin Agrawal
      It seems the resource group run_tests task wasn't configured with
      Python before, and the image doesn't have Python, since changing the
      configure default failed the job. Hence, explicitly add
      --without-python to retain the old behavior under the changed
      default.
    • Copy and follow symlinks in run_explain_suite pipeline · 8c204bd5
      Authored by David Kimura
      This allows us to reduce duplication of workload SQL scripts.
    • Add workload3 to explain pipeline · 91ed33c9
      Authored by David Kimura