1. 28 Sep, 2017 — 2 commits
  2. 23 Jul, 2017 — 2 commits
    • Modules: don't crash when Lua calls a module blocking command. · 31404355
      committed by antirez
      Lua scripting does not support calling blocking commands: all the
      native Redis blocking commands are flagged with "s" (the no-script
      flag), so calling them from a script is not possible at all. With
      modules there is no such mechanism to flag a command as not callable
      from the Lua scripting engine, and moreover we cannot trust module
      users to comply all the time: it is likely that modules with blocking
      commands will be released without such commands being flagged
      correctly, even if we provide a way to signal this fact.
      
      This commit addresses the problem in the short term by detecting that
      a module is trying to block in the context of the Lua scripting engine
      client, and preventing it from doing so. The module will actually
      believe it is blocking as usual, but what happens is that the Lua
      script receives an error immediately, and the background call is
      ignored by the Redis engine (except for the cleanup callbacks, once it
      unblocks).
      
      Long term, the more likely solution is to introduce a new call,
      RedisModule_GetClientFlags(), so that a command can detect whether the
      caller is a Lua script, and return an error or avoid blocking at all.
      
      Since the blocking API is experimental right now, more work is needed
      in this regard in order to reach a point where blocking module
      commands and all the other Redis subsystems interact peacefully.
      
      Now the effect is like the following:
      
          127.0.0.1:6379> eval "redis.call('hello.block',1,5000)" 0
          (error) ERR Error running script (call to
          f_b5ba35ff97bc1ef23debc4d6e9fd802da187ed53): @user_script:1: ERR
          Blocking module command called from Lua script
      
      This commit fixes issue #4127 in the short term.
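      For illustration, the long-term guard described above might look like
      the following inside a module command. This is a hypothetical sketch:
      RedisModule_GetClientFlags() and the REDISMODULE_CLIENT_LUA flag are
      the proposed API, not something that exists as of this commit, and the
      HelloBlock_* callbacks are placeholders.

          #include "redismodule.h"

          /* Placeholder callbacks: prototypes only in this sketch. */
          int HelloBlock_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc);
          int HelloBlock_Timeout(RedisModuleCtx *ctx, RedisModuleString **argv, int argc);
          void HelloBlock_FreeData(void *privdata);

          int HelloBlock_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv,
                                      int argc) {
              REDISMODULE_NOT_USED(argv);
              REDISMODULE_NOT_USED(argc);
              /* Proposed (not yet existing) API: detect a Lua caller and
               * return an error instead of blocking. */
              if (RedisModule_GetClientFlags(ctx) & REDISMODULE_CLIENT_LUA)
                  return RedisModule_ReplyWithError(ctx,
                      "ERR this command cannot block inside a Lua script");
              RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,
                  HelloBlock_Reply, HelloBlock_Timeout, HelloBlock_FreeData, 5000);
              /* ... start a thread that computes the reply and eventually
               * calls RedisModule_UnblockClient(bc,privdata) ... */
              (void)bc;
              return REDISMODULE_OK;
          }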
    • Fix typo in unblockClientFromModule() top comment. · 5bfdfbe1
      committed by antirez
  3. 20 Jul, 2017 — 1 commit
    • Fix two bugs in moduleTypeLookupModuleByID(). · b1c2e1a1
      committed by antirez
      The function cache was not working at all, and the function returned
      wrong values if there were two or more modules exporting native data
      types.
      
      See issue #4131 for more details.
  4. 14 Jul, 2017 — 2 commits
  5. 11 Jul, 2017 — 1 commit
  6. 10 Jul, 2017 — 1 commit
  7. 06 Jul, 2017 — 1 commit
  8. 05 Jul, 2017 — 1 commit
  9. 04 Jul, 2017 — 1 commit
  10. 27 Jun, 2017 — 1 commit
    • RDB modules values serialization format version 2. · 365dd037
      committed by antirez
      The original RDB serialization format was not parsable without the
      module loaded, because the structure was managed only by the module
      itself. Moreover RDB is a streaming protocol, in the sense that it is
      both produced in an append-only fashion, and sometimes sent directly
      to the socket (in the case of diskless replication).
      
      The fact that module values cannot be parsed without the relevant
      module loaded is a problem in many ways: RDB checking tools must have
      the modules loaded even for operations that do not involve the values
      at all, like splitting an RDB into N RDBs by key, or just checking the
      RDB for sanity.
      
      In theory module values could be just a blob of data with a prefixed
      length, so that we are able to skip them. However prefixing the values
      with a length would mean one of the following:

      1. Being able to write some data at a previous offset. This breaks
      streaming.
      2. Buffering values before outputting them. This breaks performance.
      3. Having some chunked RDB output format. This breaks simplicity.
      
      Moreover, the above solution still makes module values a totally
      opaque matter, with the following problems:

      1. The RDB check tool can just skip the value without being able to at
      least check the general structure. For datasets composed mostly of
      module values this means checking only the outer level of the RDB,
      without actually doing any check on most of the data itself.
      2. It is not possible to do any recovery or processing of data for
      which a module no longer exists in the future, or is unknown.
      
      So this commit implements a different solution. The module RDB
      serialization API is composed of well defined calls to store integers,
      floats, doubles or strings. After this commit, the parts generated by
      the module API have a one-byte prefix for each of the emitted parts,
      and there is a final EOF byte as well. So even if we don't know
      exactly how to interpret a module value, we can always parse it at a
      high level, check the overall structure, understand the types used to
      store the information, and easily skip the whole value.
      
      The change is backward compatible: older RDB files can still be
      loaded, since the new encoding uses a new RDB type, MODULE_2 (of value
      7). The commit also implements the ability to check RDB files for
      sanity taking advantage of the new feature.
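      As a rough sketch of what the new format enables, a checking tool that
      knows nothing about a given module could skip a whole MODULE_2 value
      along these lines. The opcode names and values below are illustrative,
      not the exact constants used by Redis, and lengths are simplified to
      fixed 8-byte big-endian integers rather than the real RDB
      variable-length encoding.

          #include <stdio.h>
          #include <stdint.h>

          /* Illustrative one-byte opcodes for the per-part prefix. */
          enum { OP_EOF = 0, OP_SINT, OP_UINT, OP_FLOAT, OP_DOUBLE, OP_STRING };

          /* Simplified length reader: 8-byte big-endian integer. */
          uint64_t read_len(FILE *f) {
              uint64_t v = 0;
              for (int i = 0; i < 8; i++) v = (v << 8) | (uint64_t)fgetc(f);
              return v;
          }

          /* Skip one module value without knowing the module: consume the
           * one-byte prefix of each part until the EOF byte is found. */
          int skip_module_value(FILE *f) {
              for (;;) {
                  switch (fgetc(f)) {
                  case OP_EOF:    return 0;                      /* value ends */
                  case OP_SINT:
                  case OP_UINT:   read_len(f); break;            /* integer part */
                  case OP_FLOAT:  fseek(f, 4, SEEK_CUR); break;  /* 4-byte float */
                  case OP_DOUBLE: fseek(f, 8, SEEK_CUR); break;  /* 8-byte double */
                  case OP_STRING: fseek(f, (long)read_len(f), SEEK_CUR); break;
                  default:        return -1;                     /* corruption */
                  }
              }
          }

      The same loop is enough to verify the overall structure of a value
      (every part has a known type, and the final EOF byte is present) even
      when the module that produced it is not available.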
  11. 03 May, 2017 — 3 commits
  12. 02 May, 2017 — 3 commits
  13. 29 Apr, 2017 — 1 commit
  14. 10 Apr, 2017 — 2 commits
    • Make more obvious why there was issue #3843. · 531647bb
      committed by antirez
    • Fix modules blocking commands awake delay. · ffefc9f9
      committed by antirez
      If a thread unblocks a client blocked in a module command by using the
      RedisModule_UnblockClient() API, the event loop may not be awakened
      until the next timeout of the multiplexing API or the next unrelated
      I/O operation on other clients. We actually want the client to be
      served ASAP, so a mechanism is needed for the unblocking API to inform
      Redis that there is a client to serve ASAP.
      
      This commit fixes the issue using the old pipe trick: when a client
      needs to be unblocked, a byte is written to a pipe. When we run the
      list of clients blocked in modules, we consume all the bytes written
      to the pipe. Writes and reads are performed inside the context of the
      mutex, so no race is possible in which we consume bytes that are
      actually related to an awake request for a client that should still be
      put into the list of clients to unblock.
      
      It was verified that after the fix the server handles the blocked
      clients with the expected short delay.
      
      Thanks to @dvirsky for realizing there was such a problem and
      reporting it.
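      The pipe trick itself is a classic way to wake a poll/epoll based
      event loop from another thread. A minimal self-contained sketch of the
      idea follows; it is not the actual Redis code, which hides the pipe
      behind its own event loop and the module mutex.

          #include <poll.h>
          #include <pthread.h>
          #include <stdio.h>
          #include <unistd.h>

          int wakeup_pipe[2];

          /* Worker thread: finishes a background job, then wakes the loop
           * by writing a single byte to the pipe. */
          void *worker(void *arg) {
              (void)arg;
              sleep(1);                          /* pretend to compute */
              write(wakeup_pipe[1], "x", 1);     /* "serve clients ASAP" */
              return NULL;
          }

          int main(void) {
              pipe(wakeup_pipe);
              pthread_t tid;
              pthread_create(&tid, NULL, worker, NULL);

              /* The read end of the pipe is just another fd to poll, so the
               * worker's write interrupts the (long) timeout immediately. */
              struct pollfd pfd = { .fd = wakeup_pipe[0], .events = POLLIN };
              poll(&pfd, 1, 60000);              /* returns in ~1s, not 60s */

              char buf[16];
              read(wakeup_pipe[0], buf, sizeof(buf)); /* drain wakeup bytes */
              printf("unblocked clients ready to be served\n");
              pthread_join(tid, NULL);
              return 0;
          }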
  15. 06 Mar, 2017 — 1 commit
  16. 01 Mar, 2017 — 1 commit
  17. 21 Feb, 2017 — 1 commit
    • Use SipHash hash function to mitigate HashDoS attempts. · adeed29a
      committed by antirez
      This change switches to a hash function which mitigates the effects of
      the HashDoS attack (a denial of service attack trying to force data
      structures into worst case behavior), while at the same time providing
      Redis with a hash function that does not expect the input data to be
      word aligned, a condition no longer true now that sds.c strings have a
      variable length header.
      
      Note that even with a hash function for which collisions cannot be
      generated without knowing the seed, implementation details or indirect
      exposure of the seed (for example the ability to add elements to a Set
      and check the order in which Redis returns them with SMEMBERS) may
      sometimes make the attacker's life simpler when trying to guess the
      correct seed. However the next step would be to switch to a log(N)
      data structure when too many items in a single bucket are detected:
      this seems like overkill in the case of Redis.
      
      SPEED REGRESSION TESTS:

      In order to verify that switching from MurmurHash to SipHash had no
      impact on speed, a set of benchmarks involving fast insertion of 5
      million keys was performed.
      
      The results show Redis with SipHash, in high pipelining conditions, to
      be about 4% slower compared to the previous hash function. However
      this could partially be due to the fact that the current
      implementation does not attempt to hash whole words at a time, but
      reads single bytes, in order to have an output which is endian-neutral
      and at the same time works on systems where unaligned memory accesses
      are a problem.
      
      Further x86 specific optimizations should be tested; the function may
      easily reach the same level as MurmurHash2 if a few optimizations are
      performed.
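      The core idea of a keyed (seeded) hash function can be shown with a
      tiny sketch. The mixing function below is a trivial stand-in for
      SipHash, not the real algorithm: the point is only that, since the
      seed is random per process, an attacker cannot precompute a set of
      keys that all land in the same bucket.

          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>
          #include <time.h>

          /* Trivial keyed byte-at-a-time mix (stand-in for SipHash).
           * Reading single bytes also avoids any alignment requirement. */
          uint64_t keyed_hash(const void *data, size_t len, uint64_t seed) {
              const uint8_t *p = data;
              uint64_t h = seed ^ 0x9e3779b97f4a7c15ULL;
              for (size_t i = 0; i < len; i++) {
                  h ^= p[i];
                  h *= 0x100000001b3ULL;
              }
              return h;
          }

          int main(void) {
              /* In Redis the seed is generated once at startup; here a
               * time-based value stands in for a proper random seed. */
              uint64_t seed = (uint64_t)time(NULL) * 0x2545F4914F6CDD1DULL;
              const char *key = "user:1000";
              /* Bucket index in a table of 1024 slots: unpredictable
               * without knowing the seed. */
              printf("bucket = %llu\n", (unsigned long long)
                     (keyed_hash(key, strlen(key), seed) & 1023));
              return 0;
          }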
  18. 12 Jan, 2017 — 1 commit
  19. 15 Dec, 2016 — 1 commit
  20. 13 Dec, 2016 — 1 commit
    • Replication: fix the infamous key leakage of writable slaves + EXPIRE. · 04542cff
      committed by antirez
      BACKGROUND AND USE CASE

      Redis slaves are normally read only, however they support a "writable"
      mode which is very handy when scaling reads on slaves that actually
      need write operations in order to access data. For instance imagine
      slaves replicating certain Sets keys from the master. When accessing
      the data on the slave, we want to perform intersections between such
      Sets values. However we don't want to intersect each time: caching the
      intersection for some time is often a good idea.
      
      To do so, it is possible to set up a slave as a writable slave, and
      perform the intersection on the slave side, perhaps setting a TTL on
      the resulting key so that it will expire after some time.
      
      THE BUG
      
      Problem: in order to have consistent replication, expiring keys in
      Redis replication is up to the master, which synthesizes DEL
      operations to send in the replication stream. Slaves, however,
      logically expire keys by hiding them from read attempts by clients, so
      that if the master did not promptly send a DEL, the client still sees
      logically expired keys as non existing.
      
      Because slaves don't actively expire keys by actually evicting them,
      but just mask them from the POV of read operations, if a key is
      created in a writable slave and an expire is set, the key will be
      leaked forever:

      1. No DEL will be received from the master, which does not know about
      such a key at all.

      2. No eviction will be performed by the slave, since eviction must be
      disabled on slaves: it's up to the master, otherwise consistency of
      data is lost.
      
      THE FIX
      
      In order to fix the problem, the slave should be able to tag keys that
      were created on the slave side and have an expire set.

      My solution involves a single additional dictionary, created by the
      writable slave only if needed. The dictionary is obviously keyed by
      the key name that we need to track: all the keys that are set with an
      expire directly by a client writing to the slave are tracked.

      The value in the dictionary is a bitmap of all the DBs where such a
      key name needs to be tracked, so that we can use a single dictionary
      to track keys in all the DBs used by the slave (this actually limits
      the solution to the first 64 DBs, but the default with Redis is to use
      16 DBs).
      
      This solution implies a small complexity and CPU penalty, which is
      actually zero when the feature is not used. The slave-side eviction is
      encapsulated in code which is not coupled with the rest of the Redis
      core, except for the hook needed to track the keys.
      
      TODO
      
      I'm doing the first smoke tests to see if the feature works as expected:
      so far so good. Unit tests should be added before merging into the
      4.0 branch.
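      A minimal sketch of the tracking idea described above follows. The
      names and the fixed-size array are illustrative only: the real
      implementation keys a hash table by key name, but the bitmap logic is
      the same.

          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>

          /* One entry: a key name plus one bit per DB (0..63) where a
           * client of the writable slave set an expire on that name. */
          struct tracked_key {
              char name[64];
              uint64_t dbs;
          };

          struct tracked_key tracked[128];
          int ntracked = 0;

          void track_slave_key(const char *name, int dbid) {
              for (int i = 0; i < ntracked; i++) {
                  if (strcmp(tracked[i].name, name) == 0) {
                      tracked[i].dbs |= 1ULL << dbid;  /* add the DB bit */
                      return;
                  }
              }
              snprintf(tracked[ntracked].name,
                       sizeof(tracked[ntracked].name), "%s", name);
              tracked[ntracked].dbs = 1ULL << dbid;
              ntracked++;
          }

          int main(void) {
              track_slave_key("cache:intersection", 0);
              track_slave_key("cache:intersection", 9); /* same name, DB 9 */
              /* Prints 0x201: bits 0 and 9 set, one dictionary entry. */
              printf("dbs bitmap = 0x%llx\n",
                     (unsigned long long)tracked[0].dbs);
              return 0;
          }

      On expiration the slave only needs to scan this dictionary and, in
      each DB flagged in the bitmap, delete the tracked keys whose TTL has
      been reached, leaving all master-replicated keys alone.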
  21. 30 Nov, 2016 — 2 commits
  22. 24 Nov, 2016 — 1 commit
  23. 01 Nov, 2016 — 1 commit
  24. 13 Oct, 2016 — 2 commits
  25. 07 Oct, 2016 — 4 commits
  26. 06 Oct, 2016 — 2 commits
    • Module: Ability to get context from IO context. · 152c1b68
      committed by antirez
      It was noted by @dvirsky that it is not possible to use string
      functions when writing the AOF file. This is sometimes critical, since
      the command rewriting may need to be built in the context of the AOF
      callback, and without access to the context, given the limited types
      that the AOF production functions accept, this can be an issue.
      
      Moreover there are other needs that we can't anticipate regarding the
      ability to use Redis Modules APIs using the context, in order to build
      representations to emit in the AOF / RDB.
      
      Because of this, a new API was added that allows the user to get a
      temporary context from the IO context. The context is automatically
      released, if obtained, when the RDB / AOF callback returns.

      Calling the function multiple times always returns the same context,
      since it is invalid to have more than a single context.
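      A minimal sketch of how a module's AOF rewrite callback might use the
      new call follows. MyType and the MYTYPE.SET command are hypothetical;
      RedisModule_GetContextFromIO() is the API this commit adds.

          #include "redismodule.h"

          /* Hypothetical module data type: a single counter per key. */
          typedef struct { long long counter; } MyType;

          void MyTypeAofRewrite(RedisModuleIO *aof, RedisModuleString *key,
                                void *value) {
              MyType *mt = value;
              /* Borrow a temporary context from the IO object: it is
               * released automatically when this callback returns, together
               * with the strings created from it. */
              RedisModuleCtx *ctx = RedisModule_GetContextFromIO(aof);
              RedisModuleString *count =
                  RedisModule_CreateStringFromLongLong(ctx, mt->counter);
              /* Emit "MYTYPE.SET <key> <count>" into the rewritten AOF. */
              RedisModule_EmitAOF(aof, "MYTYPE.SET", "ss", key, count);
          }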
    • Copyright notice added to module.c. · 72279e3e
      committed by antirez