<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>redis — Little Step</title>
    <link>https://rex.writeas.com/tag:redis</link>
    <description></description>
    <pubDate>Sat, 09 May 2026 23:26:45 +0000</pubDate>
    <item>
      <title>A very simple task queue rq</title>
      <link>https://rex.writeas.com/a-very-simple-task-queue-rq?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[There are many computing tasks that are not suited to synchronous execution, such as media transcoding.&#xA;&#xA;For these tasks, work is submitted to a central controller service and then dispatched to many workers.&#xA;&#xA;A message queue is commonly used in this case for communication between the controller and the workers, and is sometimes even used to implement the task queue itself.&#xA;&#xA;RQ is a very simple task queue built on Redis, and it is easy to use: you don&#39;t even need to write any worker-specific code.&#xA;&#xA;On the producer side, RQ enqueues the function and its arguments to a Redis queue, serialized with Python&#39;s pickle library. The worker retrieves the job from the Redis queue, deserializes it, and forks a process to do the actual work.&#xA;&#xA;Here is a simple example to show how straightforward the logic is.&#xA;&#xA;You have one function that does the real work, for example:&#xA;&#xA;# fib.py&#xA;def slow_fib(n):&#xA;    if n &lt;= 1:&#xA;        return 1&#xA;    else:&#xA;        return slow_fib(n-1) + slow_fib(n-2)&#xA;&#xA;Then you create jobs and enqueue them:&#xA;&#xA;# run_example.py&#xA;import os&#xA;import time&#xA;&#xA;from rq import Connection, Queue&#xA;&#xA;from fib import slow_fib&#xA;&#xA;def main():&#xA;    # Range of Fibonacci numbers to compute&#xA;    fib_range = range(20, 34)&#xA;&#xA;    # Kick off the tasks asynchronously&#xA;    async_results = {}&#xA;    q = Queue()&#xA;    for x in fib_range:&#xA;        async_results[x] = q.enqueue(slow_fib, x)&#xA;&#xA;    start_time = time.time()&#xA;    done = False&#xA;    while not done:&#xA;        os.system(&#39;clear&#39;)&#xA;        print(&#39;Asynchronously: (now = %.2f)&#39; % (time.time() - start_time,))&#xA;        done = True&#xA;        for x in fib_range:&#xA;            result = async_results[x].return_value&#xA;            if result is None:&#xA;                done = False&#xA;                result = &#39;(calculating)&#39;&#xA;            print(&#39;fib(%d) = %s&#39; % (x, result))&#xA;        print(&#39;&#39;)&#xA;        print(&#39;To start the actual computation in the background, run a worker:&#39;)&#xA;        print(&#39;    python examples/run_worker.py&#39;)&#xA;        time.sleep(0.2)&#xA;&#xA;    print(&#39;Done&#39;)&#xA;&#xA;if __name__ == &#39;__main__&#39;:&#xA;    # Tell RQ what Redis connection to use&#xA;    with Connection():&#xA;        main()&#xA;&#xA;On the producer side, you run it like this:&#xA;&#xA;python3 run_example.py&#xA;Asynchronously: (now = 8.04)&#xA;fib(20) = 10946&#xA;fib(21) = 17711&#xA;fib(22) = 28657&#xA;fib(23) = 46368&#xA;fib(24) = 75025&#xA;fib(25) = 121393&#xA;fib(26) = 196418&#xA;fib(27) = 317811&#xA;fib(28) = 514229&#xA;fib(29) = 832040&#xA;fib(30) = 1346269&#xA;fib(31) = 2178309&#xA;fib(32) = 3524578&#xA;fib(33) = 5702887&#xA;&#xA;To start the actual computation in the background, run a worker:&#xA;    python examples/run_worker.py&#xA;Done&#xA;&#xA;On the worker side, you only need this (but make sure the command is executed in the same directory as fib.py):&#xA;&#xA;rqworker&#xA;15:36:32 Worker rq:worker:bd9fbdd72217489288bcf6c47e499f9c: started, version 1.11.0&#xA;15:36:32 Subscribing to channel rq:pubsub:bd9fbdd72217489288bcf6c47e499f9c&#xA;15:36:32 *** Listening on default...&#xA;15:36:32 Cleaning registries for queue: default&#xA;15:36:32 default: fib.slow_fib(20) (0c5dc1dd-b8b8-4a23-9220-1f4f03781c53)&#xA;15:36:32 default: Job OK (0c5dc1dd-b8b8-4a23-9220-1f4f03781c53)&#xA;15:36:32 Result is kept for 500 seconds&#xA;&#xA;There are two things to pay attention to: a) the actual task function needs to live in a separate file (fib.py in this case), and b) rqworker needs to be executed in the same source directory as the producer (run_example.py in this case).&#xA;&#xA;#rq #redis #taskq]]&gt;</description>
      <content:encoded><![CDATA[<p>There are many computing tasks that are not suited to synchronous execution, such as media transcoding.</p>

<p>For these tasks, work is submitted to a central controller service and then dispatched to many workers.</p>

<p>A message queue is commonly used in this case for communication between the controller and the workers, and is sometimes even used to implement the task queue itself.</p>

<p><a href="https://python-rq.org" rel="nofollow">RQ</a> is a very simple task queue built on Redis, and it is easy to use: you don&#39;t even need to write any worker-specific code.</p>

<p>On the producer side, RQ enqueues the function and its arguments to a Redis queue, serialized with Python&#39;s pickle library. The worker retrieves the job from the Redis queue, deserializes it, and forks a process to do the actual work.</p>
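<p>Under the hood, pickle serializes a module-level function by its import path rather than its bytecode, which is why the worker must be able to import the task function. A small illustration of the round trip (using math.sqrt as a stand-in for the task function, not RQ&#39;s actual wire format):</p>

```python
import pickle
import math

# What the producer conceptually stores in Redis: the function plus its arguments.
# pickle records math.sqrt as an import path ("math", "sqrt"), not as code.
payload = pickle.dumps((math.sqrt, (9,)))

# What the worker does: deserialize and call.
func, args = pickle.loads(payload)
print(func(*args))  # 3.0
```

This is also why the task function cannot live only in the producer&#39;s main script: the worker process needs to import it by name from a module it can find.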

<p>Here is a simple example to show how straightforward the logic is.</p>

<p>You have one function that does the real work, for example:</p>

<pre><code class="language-python"># fib.py
def slow_fib(n):
    if n &lt;= 1:
        return 1
    else:
        return slow_fib(n-1) + slow_fib(n-2)
</code></pre>

<p>Then you create jobs and enqueue them:</p>

<pre><code class="language-python"># run_example.py
import os
import time

from rq import Connection, Queue

from fib import slow_fib


def main():
    # Range of Fibonacci numbers to compute
    fib_range = range(20, 34)

    # Kick off the tasks asynchronously
    async_results = {}
    q = Queue()
    for x in fib_range:
        async_results[x] = q.enqueue(slow_fib, x)

    start_time = time.time()
    done = False
    while not done:
        os.system(&#39;clear&#39;)
        print(&#39;Asynchronously: (now = %.2f)&#39; % (time.time() - start_time,))
        done = True
        for x in fib_range:
            result = async_results[x].return_value
            if result is None:
                done = False
                result = &#39;(calculating)&#39;
            print(&#39;fib(%d) = %s&#39; % (x, result))
        print(&#39;&#39;)
        print(&#39;To start the actual computation in the background, run a worker:&#39;)
        print(&#39;    python examples/run_worker.py&#39;)
        time.sleep(0.2)

    print(&#39;Done&#39;)


if __name__ == &#39;__main__&#39;:
    # Tell RQ what Redis connection to use
    with Connection():
        main()
</code></pre>

<p>On the producer side, you run it like this:</p>

<pre><code class="language-shell">python3 run_example.py
Asynchronously: (now = 8.04)
fib(20) = 10946
fib(21) = 17711
fib(22) = 28657
fib(23) = 46368
fib(24) = 75025
fib(25) = 121393
fib(26) = 196418
fib(27) = 317811
fib(28) = 514229
fib(29) = 832040
fib(30) = 1346269
fib(31) = 2178309
fib(32) = 3524578
fib(33) = 5702887

To start the actual computation in the background, run a worker:
    python examples/run_worker.py
Done

</code></pre>

<p>On the worker side, you only need this (but make sure the command is executed in the same directory as fib.py):</p>

<pre><code class="language-shell">rqworker
15:36:32 Worker rq:worker:bd9fbdd72217489288bcf6c47e499f9c: started, version 1.11.0
15:36:32 Subscribing to channel rq:pubsub:bd9fbdd72217489288bcf6c47e499f9c
15:36:32 *** Listening on default...
15:36:32 Cleaning registries for queue: default
15:36:32 default: fib.slow_fib(20) (0c5dc1dd-b8b8-4a23-9220-1f4f03781c53)
15:36:32 default: Job OK (0c5dc1dd-b8b8-4a23-9220-1f4f03781c53)
15:36:32 Result is kept for 500 seconds
</code></pre>

<p>There are two things to pay attention to: a) the actual task function needs to live in a separate file (fib.py in this case), and b) rqworker needs to be executed in the same source directory as the producer (run_example.py in this case).</p>

<p><a href="https://rex.writeas.com/tag:rq" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">rq</span></a> <a href="https://rex.writeas.com/tag:redis" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">redis</span></a> <a href="https://rex.writeas.com/tag:taskq" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">taskq</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/a-very-simple-task-queue-rq</guid>
      <pubDate>Wed, 24 Aug 2022 07:43:14 +0000</pubDate>
    </item>
    <item>
      <title>Redis Performance Enhancement</title>
      <link>https://rex.writeas.com/redis-performance-enhancement-wc0j?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Recently we found our Redis service under heavy load: very high CPU usage (80%–90%) and QPS (100k requests/second).&#xA;&#xA;Although the number of clients and users is increasing, the pace is not as fast as the growth in load.&#xA;&#xA;After checking the slow log and monitoring graphs, we found several causes:&#xA;&#xA;Usage of poorly performing commands in the production environment, such as KEYS, ZRANGE, and SMEMBERS&#xA;&#xA;Complicated tasks in Lua scripts&#xA;&#xA;High QPS for the EXISTS command&#xA;&#xA;For the first cause, we replaced those commands with their SCAN-style counterparts: SCAN, ZSCAN, and SSCAN.&#xA;&#xA;As for Lua scripts, the only reason we use them is to keep a group of commands in one transaction. But if the logic is very complicated, or even involves the poorly performing commands above, it hurts performance badly. The solution is to split the big script into several small scripts, or even plain commands.&#xA;&#xA;As for the EXISTS command, we use it as an ID check in many cases. We solved this by keeping a copy of these IDs in memory and syncing with Redis periodically.&#xA;&#xA;Reviewing these causes, all of them are quite basic mistakes. The reason we didn&#39;t avoid them is that we paid little attention to performance beyond business logic. But as the number of clients grows, the problem only gets worse. It is better to address performance at the beginning, in the design phase, rather than rushing just to get the work done.&#xA;&#xA;#Redis #Database #NoSQL]]&gt;</description>
      <content:encoded><![CDATA[<p>Recently we found our Redis service under heavy load: very high CPU usage (80%–90%) and QPS (100k requests/second).</p>

<p>Although the number of clients and users is increasing, the pace is not as fast as the growth in load.</p>

<p>After checking the slow log and monitoring graphs, we found several causes:</p>
<ol><li><p>Usage of poorly performing commands in the production environment, such as <strong>KEYS</strong>, <strong>ZRANGE</strong>, and <strong>SMEMBERS</strong></p></li>

<li><p>Complicated tasks in Lua scripts</p></li>

<li><p>High QPS for the <strong>EXISTS</strong> command</p></li></ol>

<p>For the first cause, we replaced those commands with their <strong>SCAN</strong>-style counterparts: <strong>SCAN</strong>, <strong>ZSCAN</strong>, and <strong>SSCAN</strong>.</p>
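<p>The difference matters because <strong>KEYS</strong> walks the entire keyspace in one blocking call on Redis&#39;s single command thread, while <strong>SCAN</strong> returns one bounded batch per call and resumes from a cursor, keeping each call cheap. A rough pure-Python sketch of the cursor idea (an illustration only, not how Redis is actually implemented; real Redis scans hash-table buckets and may return duplicates):</p>

```python
def scan(keys, cursor=0, count=10):
    """Return one bounded batch plus the cursor to resume from.
    As in Redis, a returned cursor of 0 means the iteration is complete."""
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(keys):
        next_cursor = 0  # signal completion, Redis-style
    return next_cursor, batch

keys = ['user:%d' % i for i in range(25)]

# KEYS-style would touch all 25 keys in one call; SCAN-style makes
# several short calls, so the server stays responsive in between.
cursor, seen = 0, []
while True:
    cursor, batch = scan(keys, cursor)
    seen.extend(batch)
    if cursor == 0:
        break
print(len(seen))  # 25
```

<p>With the real redis-py client, the same loop is typically written as <code>for key in r.scan_iter(match=&#39;user:*&#39;, count=100)</code>, which drives the cursor for you.</p>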

<p>As for Lua scripts, the only reason we use them is to keep a group of commands in one transaction. But if the logic is very complicated, or even involves the poorly performing commands above, it hurts performance badly, because the whole script blocks the server while it runs. The solution is to split the big script into several small scripts, or even plain commands.</p>

<p>As for the <strong>EXISTS</strong> command, we use it as an ID check in many cases. We solved this by keeping a copy of these IDs in memory and syncing with Redis periodically.</p>
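<p>A minimal sketch of that pattern (the names and TTL are illustrative, and the Redis read is stubbed out; in practice the fetch would be a bulk read such as SMEMBERS or a SCAN against the real server):</p>

```python
import time

class IdCache:
    """Keep known IDs in local memory so hot-path existence checks
    don't hit Redis; refresh the whole set periodically."""

    def __init__(self, fetch_ids, ttl=30.0):
        self._fetch_ids = fetch_ids   # callable returning the current ID set
        self._ttl = ttl               # seconds between refreshes
        self._ids = set()
        self._last_refresh = None

    def exists(self, id_):
        now = time.monotonic()
        if self._last_refresh is None or now - self._last_refresh > self._ttl:
            # One bulk read per TTL window instead of one EXISTS per check.
            self._ids = set(self._fetch_ids())
            self._last_refresh = now
        return id_ in self._ids

# Stand-in for reading the ID set from Redis.
def fetch_from_redis():
    return {'u1', 'u2', 'u3'}

cache = IdCache(fetch_from_redis, ttl=30.0)
print(cache.exists('u2'))    # True
print(cache.exists('u999'))  # False
```

<p>The trade-off is staleness: an ID added to Redis is invisible locally until the next refresh, so this only fits checks that tolerate up to one TTL of lag.</p>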

<p>Reviewing these causes, all of them are quite basic mistakes. The reason we didn&#39;t avoid them is that we paid little attention to performance beyond business logic. But as the number of clients grows, the problem only gets worse. It is better to address performance at the beginning, in the design phase, rather than rushing just to get the work done.</p>

<p><a href="https://rex.writeas.com/tag:Redis" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Redis</span></a> <a href="https://rex.writeas.com/tag:Database" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Database</span></a> <a href="https://rex.writeas.com/tag:NoSQL" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">NoSQL</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/redis-performance-enhancement-wc0j</guid>
      <pubDate>Wed, 04 Mar 2020 01:38:20 +0000</pubDate>
    </item>
  </channel>
</rss>