<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Little Step</title>
    <link>https://rex.writeas.com/</link>
    <description></description>
    <pubDate>Wed, 08 Apr 2026 20:30:49 +0000</pubDate>
    <item>
      <title>JavaScript is a compromise between Netscape and Sun</title>
      <link>https://rex.writeas.com/javascript-is-a-compromise-for-netscape-and-sun?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Recently I listen a podcase related to JASON and XML, and I got to know the interesting fact that JavaScript is a compromise between Netscape and Java, and is definitely related to Java.&#xA;&#xA;Full link: JSON vs XML With Douglas Crockford&#xA;&#xA;  Originally it(JavaScript) wasn’t supposed to be called JavaScript. It was going to be called Moca, and there was a tension between Sun and Netscape.&#xA;    So Sun had been claiming that if you write to the Java virtual machine, it doesn’t matter what operating system you’re running on, and that means we can be liberated from Microsoft. And Netscape said, “If you target all of your applications to the browser, the browser can run on all of the operating systems, so you’re no longer dependent on Microsoft.”&#xA;    So they decided to have an alliance, and the first thing they agreed on was that Netscape would put Java into the Netscape browser, so they did, in the form of applets. So you could write applets in Java and they would run in the Netscape browser.&#xA;    The next thing Sun demanded was, you have to kill Mocha, which I think by that time had been renamed LiveScript, because you’re making this look bad. We’re telling everybody that Java is the last programming language you’ll ever need, and you have this stupid looking thing called LiveScript. Why are you doing that? This is just confusion.&#xA;    So Netscape thought they could do a similar thing for their navigator browser that, if they could get people programming in the same way that they did on HyperCard, on the browser, but now they can have photographs and color and maybe sound effects, it could be a lot more interesting, and you can’t do that in Java.&#xA;    But Sun was not happy about this. They said, “We thought we agreed that Java was going to be how you script the web.” And Netscape probably said, “Listen, we can’t rebuild everything to make it centered around the JVM. 
That’s too much work and this scripting thing, we have works and is beginner-friendly.”&#xA;    And so, they were at an impasse and their alliance almost broke when someone, it might have been Marc Andreessen, it might have been a joke, suggested that they changed the name of LiveScript to JavaScript, And we’ll tell people it’s not a different language, it’s a subset of Java. It’s just this little reduced version of Java, it’s Java’s stupid little brother. It’s the same thing. It’s not a different thing. And Sun said, “Yeah, okay.” And they held a press conference and they went out and they lied to the world about what JavaScript was, and that’s why the language has this stupid confusing name.&#xA;&#xA;\#json #javascript]]&gt;</description>
<content:encoded><![CDATA[<p>Recently I listened to a podcast about JSON and XML, and learned the interesting fact that JavaScript was a compromise between Netscape and Sun, and that its name really does come from Java.</p>

<p>Full link: <a href="https://corecursive.com/json-vs-xml-douglas-crockford/" rel="nofollow">JSON vs XML With Douglas Crockford</a></p>

<blockquote><p>Originally it (JavaScript) wasn’t supposed to be called JavaScript. It was going to be called Mocha, and there was a tension between Sun and Netscape.</p>

<p>So Sun had been claiming that if you write to the Java virtual machine, it doesn’t matter what operating system you’re running on, and that means we can be liberated from Microsoft. And Netscape said, “If you target all of your applications to the browser, the browser can run on all of the operating systems, so you’re no longer dependent on Microsoft.”</p>

<p>So they decided to have an alliance, and the first thing they agreed on was that Netscape would put Java into the Netscape browser, so they did, in the form of applets. So you could write applets in Java and they would run in the Netscape browser.</p>

<p>The next thing Sun demanded was, you have to kill Mocha, which I think by that time had been renamed LiveScript, because you’re making this look bad. We’re telling everybody that Java is the last programming language you’ll ever need, and you have this stupid looking thing called LiveScript. Why are you doing that? This is just confusion.</p>

<p>So Netscape thought they could do a similar thing for their navigator browser that, if they could get people programming in the same way that they did on HyperCard, on the browser, but now they can have photographs and color and maybe sound effects, it could be a lot more interesting, and you can’t do that in Java.</p>

<p>But Sun was not happy about this. They said, “We thought we agreed that Java was going to be how you script the web.” And Netscape probably said, “Listen, we can’t rebuild everything to make it centered around the JVM. That’s too much work, and this scripting thing we have works and is beginner-friendly.”</p>

<p>And so, they were at an impasse and their alliance almost broke when someone, it might have been Marc Andreessen, it might have been a joke, suggested that they change the name of LiveScript to JavaScript, and we’ll tell people it’s not a different language, it’s a subset of Java. It’s just this little reduced version of Java, it’s Java’s stupid little brother. It’s the same thing. It’s not a different thing. And Sun said, “Yeah, okay.” And they held a press conference and they went out and they lied to the world about what JavaScript was, and that’s why the language has this stupid confusing name.</p></blockquote>

<p>#json <a href="https://rex.writeas.com/tag:javascript" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">javascript</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/javascript-is-a-compromise-for-netscape-and-sun</guid>
      <pubDate>Thu, 06 Apr 2023 15:13:09 +0000</pubDate>
    </item>
    <item>
      <title>Use std::enable_shared_from_this for this in smart pointer mode</title>
      <link>https://rex.writeas.com/use-std-enablesharedfrom_this-for-this-in-smart-point-mode?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[As we all know that there is a this pointer for all objects, but if we want to use smart pointer, how to we use.&#xA;&#xA;It turns out that it&#39;s not that easy by creating a share\ptr from this pointer, since we may have many shared\ptrs points to the same object with without knowing each other.&#xA;&#xA;Taking this code as an example, if we create a share\ptr by dangerous function, we will have two share\ptrs pointing to the same object. Both sp1 and sp2 point to same object, and when the two exits it leads problem.&#xA;&#xA;struct S&#xA;{&#xA;  sharedptrS dangerous() {&#xA;    return sharedptrS(this);  // don&#39;t do this!&#xA;  }&#xA;};&#xA;&#xA;int main() {&#xA;  sharedptrS sp1(new S);&#xA;  sharedptrS sp2 = sp-  dangerous();&#xA;  return 0;&#xA;}&#xA;&#xA;How to fix this problem, use the std::enable\shared\from\this help class defined inmemory, it is introduced in C++11.&#xA;&#xA;struct S: enablesharedfromthisS {&#xA;  sharedptrS dangerous() {&#xA;    return sharedfromthis();&#xA;  }&#xA;};&#xA;&#xA;int main() {&#xA;  sharedptrS sp1(new S);&#xA;  sharedptrS sp2 = sp-  dangerous();  // not dangerous&#xA;&#xA;  return 0;&#xA;}&#xA;&#xA;#cpp #share\ptr]]&gt;</description>
<content:encoded><![CDATA[<p>As we all know, every object has a <strong>this</strong> pointer; but how do we use it when the object is managed by smart pointers?</p>

<p>It turns out that it&#39;s not safe to simply create a shared_ptr from the this pointer, since we may end up with several shared_ptrs owning the same object without knowing about each other.</p>

<p>Take this code as an example: if we create a shared_ptr through the dangerous() function, we get two independent shared_ptrs owning the same object. Both sp1 and sp2 own the same S, and when both are destroyed, the object is deleted twice, which is undefined behavior.</p>

<pre><code class="language-cpp">#include &lt;memory&gt;
using std::shared_ptr;

struct S
{
  shared_ptr&lt;S&gt; dangerous() {
    return shared_ptr&lt;S&gt;(this);  // don&#39;t do this!
  }
};

int main() {
  shared_ptr&lt;S&gt; sp1(new S);
  shared_ptr&lt;S&gt; sp2 = sp1-&gt;dangerous();  // a second, independent owner of the same S
  return 0;  // both sp1 and sp2 delete the object: double free
}
</code></pre>

<p>How do we fix this? Use the <em>std::enable_shared_from_this</em> helper class defined in <em>&lt;memory&gt;</em>, introduced in C++11.</p>

<pre><code class="language-cpp">#include &lt;memory&gt;
using std::shared_ptr;

struct S : std::enable_shared_from_this&lt;S&gt; {
  shared_ptr&lt;S&gt; dangerous() {
    return shared_from_this();
  }
};

int main() {
  shared_ptr&lt;S&gt; sp1(new S);
  shared_ptr&lt;S&gt; sp2 = sp1-&gt;dangerous();  // not dangerous: sp2 shares ownership with sp1

  return 0;
}
</code></pre>

<p><a href="https://rex.writeas.com/tag:cpp" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">cpp</span></a> <a href="https://rex.writeas.com/tag:share" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">share_ptr</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/use-std-enablesharedfrom_this-for-this-in-smart-point-mode</guid>
      <pubDate>Wed, 24 Aug 2022 08:09:16 +0000</pubDate>
    </item>
    <item>
      <title>A very simple task queue rq</title>
      <link>https://rex.writeas.com/a-very-simple-task-queue-rq?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[There a lot of tasks in computer that are not appropriate for working synchronizing, such as media transcoding.&#xA;&#xA;For these tasks, the work will be submitted to a central controlled service, and then dispatched to many workers.&#xA;&#xA;Message queue is commonly used in this case for communication during controller and workers, and sometimes even used to implemented task queue.&#xA;&#xA;RQ is a very simple task queue with Redis, and is easy using, you even don&#39;t need to write specific code for workers.&#xA;&#xA;The producer in RQ side enqueues the data and object to Redis queue, the object is serialized with Python&#39;s pickle lib. And the worker retries task from the Redis queue, and deserializes the job and fork one process to do the actual work.&#xA;&#xA;There is a simple example to show the simpliciy for the logic.&#xA;&#xA;You have one function to do the real job, such as:&#xA;&#xA;fib.py&#xA;def slowfib(n):&#xA;    if n &lt;= 1:&#xA;        return 1&#xA;    else:&#xA;        return slowfib(n-1) + slowfib(n-2)&#xA;&#xA;And you create jobs and enqueue:&#xA;&#xA;runexample.py&#xA;import os&#xA;import time&#xA;&#xA;from rq import Connection, Queue&#xA;&#xA;from fib import slowfib&#xA;&#xA;def main():&#xA;    # Range of Fibonacci numbers to compute&#xA;    fibrange = range(20, 34)&#xA;&#xA;    # Kick off the tasks asynchronously&#xA;    asyncresults = {}&#xA;    q = Queue()&#xA;    for x in fibrange:&#xA;        asyncresults[x] = q.enqueue(slowfib, x)&#xA;&#xA;    starttime = time.time()&#xA;    done = False&#xA;    while not done:&#xA;        os.system(&#39;clear&#39;)&#xA;        print(&#39;Asynchronously: (now = %.2f)&#39; % (time.time() - starttime,))&#xA;        done = True&#xA;        for x in fibrange:&#xA;            result = asyncresults[x].returnvalue&#xA;            if result is None:&#xA;                done = False&#xA;                result = &#39;(calculating)&#39;&#xA;            print(&#39;fib(%d) = 
%s&#39; % (x, result))&#xA;        print(&#39;&#39;)&#xA;        print(&#39;To start the actual in the background, run a worker:&#39;)&#xA;        print(&#39;    python examples/runworker.py&#39;)&#xA;        time.sleep(0.2)&#xA;&#xA;    print(&#39;Done&#39;)&#xA;&#xA;if name == &#39;main&#39;:&#xA;    # Tell RQ what Redis connection to use&#xA;    with Connection():&#xA;        main()&#xA;&#xA;On the producer side, you run like this:&#xA;&#xA;python3 runexample.py&#xA;Asynchronously: (now = 8.04)&#xA;fib(20) = 10946&#xA;fib(21) = 17711&#xA;fib(22) = 28657&#xA;fib(23) = 46368&#xA;fib(24) = 75025&#xA;fib(25) = 121393&#xA;fib(26) = 196418&#xA;fib(27) = 317811&#xA;fib(28) = 514229&#xA;fib(29) = 832040&#xA;fib(30) = 1346269&#xA;fib(31) = 2178309&#xA;fib(32) = 3524578&#xA;fib(33) = 5702887&#xA;&#xA;To start the actual in the background, run a worker:&#xA;    python examples/runworker.py&#xA;Done&#xA;&#xA;On the worker side, you only need this(but make sure this command executed at the same directory of fib.py):&#xA;&#xA;rqworker&#xA;15:36:32 Worker rq:worker:bd9fbdd72217489288bcf6c47e499f9c: started, version 1.11.0&#xA;15:36:32 Subscribing to channel rq:pubsub:bd9fbdd72217489288bcf6c47e499f9c&#xA;15:36:32 *** Listening on default...&#xA;15:36:32 Cleaning registries for queue: default&#xA;15:36:32 default: fib.slowfib(20) (0c5dc1dd-b8b8-4a23-9220-1f4f03781c53)&#xA;15:36:32 default: Job OK (0c5dc1dd-b8b8-4a23-9220-1f4f03781c53)&#xA;15:36:32 Result is kept for 500 seconds&#xA;&#xA;But there are two things we need to pay attention, a) the actually function for the task need to be in a separate file(fib.py in this case), and b)rqworker need to be executed under the same source directory of producer(run\example.py in this case).&#xA;&#xA;#rq #redis #taskq]]&gt;</description>
<content:encoded><![CDATA[<p>There are a lot of tasks in computing that are not appropriate to run synchronously, such as media transcoding.</p>

<p>For these tasks, the work is submitted to a central controller service and then dispatched to many workers.</p>

<p>A message queue is commonly used in this case for communication between the controller and the workers, and is sometimes even used to implement the task queue itself.</p>
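
<p>As a toy illustration of this controller/worker pattern (an in-process sketch using only Python&#39;s standard library; real task queues such as RQ keep the queue in an external broker like Redis):</p>

<pre><code class="language-python">import queue
import threading

# A toy, in-process sketch of the controller/worker pattern:
# the controller enqueues jobs, a worker pulls and executes them.
jobs = queue.Queue()
results = {}

def worker():
    while True:
        item = jobs.get()
        if item is None:  # sentinel: no more work
            break
        key, fn, arg = item
        results[key] = fn(arg)

worker_thread = threading.Thread(target=worker)
worker_thread.start()

# The "controller" dispatches work, then waits for the worker to drain it.
for n in range(5):
    jobs.put((n, lambda x: x * x, n))
jobs.put(None)
worker_thread.join()
print(results)  # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
</code></pre>

<p>Swap the in-process queue for Redis and the thread for a separate worker process, and you arrive at the model RQ implements.</p>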

<p><a href="https://python-rq.org" rel="nofollow">RQ</a> is a very simple task queue backed by Redis, and it is easy to use; you don&#39;t even need to write specific code for the workers.</p>

<p>On the producer side, RQ enqueues the function and its arguments into a Redis queue, serialized with Python&#39;s pickle library. The worker then retrieves the job from the Redis queue, deserializes it, and forks a process to do the actual work.</p>
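
<p>A small aside on how the pickling works (illustrative; math.sqrt stands in for a task function): functions are pickled by reference, not by value, which is exactly why the worker must be able to import the task&#39;s module.</p>

<pre><code class="language-python">import math
import pickle

# Pickle stores only the function's qualified name; the function body
# itself is not sent over the wire, so the worker resolves the name by
# importing the same module.
payload = pickle.dumps(math.sqrt)
fn = pickle.loads(payload)
assert fn is math.sqrt
assert fn(9.0) == 3.0
</code></pre>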

<p>Here is a simple example to show how simple the logic is.</p>

<p>You have one function that does the real work, such as:</p>

<pre><code class="language-python"># fib.py
def slow_fib(n):
    if n &lt;= 1:
        return 1
    else:
        return slow_fib(n-1) + slow_fib(n-2)
</code></pre>

<p>And you create jobs and enqueue:</p>

<pre><code class="language-python"># run_example.py
import os
import time

from rq import Connection, Queue

from fib import slow_fib


def main():
    # Range of Fibonacci numbers to compute
    fib_range = range(20, 34)

    # Kick off the tasks asynchronously
    async_results = {}
    q = Queue()
    for x in fib_range:
        async_results[x] = q.enqueue(slow_fib, x)

    start_time = time.time()
    done = False
    while not done:
        os.system(&#39;clear&#39;)
        print(&#39;Asynchronously: (now = %.2f)&#39; % (time.time() - start_time,))
        done = True
        for x in fib_range:
            result = async_results[x].return_value
            if result is None:
                done = False
                result = &#39;(calculating)&#39;
            print(&#39;fib(%d) = %s&#39; % (x, result))
        print(&#39;&#39;)
        print(&#39;To start the actual in the background, run a worker:&#39;)
        print(&#39;    python examples/run_worker.py&#39;)
        time.sleep(0.2)

    print(&#39;Done&#39;)


if __name__ == &#39;__main__&#39;:
    # Tell RQ what Redis connection to use
    with Connection():
        main()
</code></pre>

<p>On the producer side, you run it like this:</p>

<pre><code class="language-shell">python3 run_example.py
Asynchronously: (now = 8.04)
fib(20) = 10946
fib(21) = 17711
fib(22) = 28657
fib(23) = 46368
fib(24) = 75025
fib(25) = 121393
fib(26) = 196418
fib(27) = 317811
fib(28) = 514229
fib(29) = 832040
fib(30) = 1346269
fib(31) = 2178309
fib(32) = 3524578
fib(33) = 5702887

To start the actual in the background, run a worker:
    python examples/run_worker.py
Done

</code></pre>

<p>On the worker side, you only need this (but make sure the command is executed in the same directory as fib.py):</p>

<pre><code class="language-shell">rqworker
15:36:32 Worker rq:worker:bd9fbdd72217489288bcf6c47e499f9c: started, version 1.11.0
15:36:32 Subscribing to channel rq:pubsub:bd9fbdd72217489288bcf6c47e499f9c
15:36:32 *** Listening on default...
15:36:32 Cleaning registries for queue: default
15:36:32 default: fib.slow_fib(20) (0c5dc1dd-b8b8-4a23-9220-1f4f03781c53)
15:36:32 default: Job OK (0c5dc1dd-b8b8-4a23-9220-1f4f03781c53)
15:36:32 Result is kept for 500 seconds
</code></pre>

<p>There are two things to pay attention to: a) the actual function for the task needs to live in a separate file (fib.py in this case), and b) rqworker needs to be executed in the same source directory as the producer (run_example.py in this case).</p>

<p><a href="https://rex.writeas.com/tag:rq" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">rq</span></a> <a href="https://rex.writeas.com/tag:redis" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">redis</span></a> <a href="https://rex.writeas.com/tag:taskq" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">taskq</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/a-very-simple-task-queue-rq</guid>
      <pubDate>Wed, 24 Aug 2022 07:43:14 +0000</pubDate>
    </item>
    <item>
      <title>DNS problem with IOT SIM card</title>
      <link>https://rex.writeas.com/dns-problem-with-iot-sim-card?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[We have a IOT device with several SIM cards aggregated to provided the communication for the upper layer applications.&#xA;&#xA;During the test, we got several communications problems, after a short time debug and contacted with operators, we get to know that is the IOT DNS restrictions.&#xA;&#xA;Say we have two SIM cards, and we want to access example.com.&#xA;&#xA;We send DNS messages via the first one, and after that we can access the site via the first SIM card. If we access the site via the second card, the communication will be blocked. Because from the second card of view, the source is unknown.&#xA;&#xA;To get around of it, the simple solution is to do name resolve periodically from all the SIM cards.&#xA;&#xA;Ping is the first one came to mind, it has a -I option to specify the iterface/address.&#xA;&#xA;-I interface&#xA;       interface  is  either an address, or an interface name.  If interface is an address, it sets source address to specified interface address.  If interface in an&#xA;       interface name, it sets source interface to specified interface.  For IPv6, when doing ping to a link-local scope address, link specification (by the &#39;%&#39;-nota‐&#xA;       tion in destination, or by this option) is required.&#xA;&#xA;But it doesn&#39;t work, it only ensure ICMP message other than DNS message, even we use domain as the target.&#xA;&#xA;The other one is dig, which has the similar option as ping:&#xA;&#xA;-b address[#port]&#xA;    Set the source IP address of the query. The address must be a valid address on one of the host&#39;s network interfaces, or &#34;0.0.0.0&#34; or &#34;::&#34;. An&#xA;    optional port may be specified by appending &#34;#port&#34;&#xA;&#xA;It ensure DNS message via the specific address, and after periodicall dig the problem is solved.&#xA;&#xA;#IOT #dig #DNS]]&gt;</description>
<content:encoded><![CDATA[<p>We have an IoT device that aggregates several SIM cards to provide communication for the upper-layer applications.</p>

<p>During testing we hit several communication problems; after a short debugging session and contacting the operators, we learned the cause was the operators&#39; IoT DNS restrictions.</p>

<p>Say we have two SIM cards, and we want to access <strong>example.com</strong>.</p>

<p>We send the DNS query via the first card, and after that we can access the site via the first SIM card. But if we access the site via the second card, the communication is blocked, because from the second card&#39;s point of view the source is unknown.</p>

<p>To get around it, the simple solution is to perform name resolution periodically from all the SIM cards.</p>

<p><strong>ping</strong> is the first tool that comes to mind; it has a <strong>-I</strong> option to specify the interface/address.</p>

<pre><code class="language-shell">-I interface
       interface  is  either an address, or an interface name.  If interface is an address, it sets source address to specified interface address.  If interface in an
       interface name, it sets source interface to specified interface.  For IPv6, when doing ping to a link-local scope address, link specification (by the &#39;%&#39;-nota‐
       tion in destination, or by this option) is required.
</code></pre>

<p>But it doesn&#39;t work: the option only controls the ICMP messages, not the DNS query, even when we use a domain name as the target.</p>

<p>The other tool is <strong>dig</strong>, which has a similar option to ping&#39;s:</p>

<pre><code class="language-shell">-b address[#port]
    Set the source IP address of the query. The address must be a valid address on one of the host&#39;s network interfaces, or &#34;0.0.0.0&#34; or &#34;::&#34;. An
    optional port may be specified by appending &#34;#&lt;port&gt;&#34;
</code></pre>

<p>It ensures the DNS query is sent from the specific address, and after running dig periodically from each card&#39;s address, the problem is solved.</p>
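
<p>The same per-address query can also be done directly from code. Below is a minimal sketch using only Python&#39;s standard library (build_query and resolve_from are hypothetical helper names, and the source address is a placeholder for one SIM card&#39;s address): binding the UDP socket before sending forces the query to leave through that card.</p>

<pre><code class="language-python">import socket
import struct

def build_query(hostname):
    # Minimal DNS query packet: 12-byte header, then the encoded name,
    # then QTYPE A (1) and QCLASS IN (1).
    header = struct.pack('!HHHHHH', 0x1234, 0x0100, 1, 0, 0, 0)
    labels = [label.encode() for label in hostname.split('.')]
    qname = b''.join(bytes([len(label)]) + label for label in labels) + b'\x00'
    return header + qname + struct.pack('!HH', 1, 1)

def resolve_from(source_ip, hostname, server='8.8.8.8'):
    # Binding to source_ip (one SIM card's local address) makes the
    # query leave through that card, so the operator sees DNS traffic
    # on every card we iterate over.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((source_ip, 0))
    sock.settimeout(5)
    sock.sendto(build_query(hostname), (server, 53))
    return sock.recvfrom(512)[0]
</code></pre>

<p>In practice a periodic dig from each address does the same job with less code; the sketch just shows what dig -b is doing under the hood.</p>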

<p><a href="https://rex.writeas.com/tag:IOT" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">IOT</span></a> <a href="https://rex.writeas.com/tag:dig" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">dig</span></a> <a href="https://rex.writeas.com/tag:DNS" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">DNS</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/dns-problem-with-iot-sim-card</guid>
      <pubDate>Sun, 12 Dec 2021 15:16:31 +0000</pubDate>
    </item>
    <item>
      <title>Write with sudo</title>
      <link>https://rex.writeas.com/write-with-sudo?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Today I learn a trick on nixCraft to save a file without root permission in vim.&#xA;&#xA;The simple case is like this:&#xA;&#xA;You want to change a config file using vim, and after you edit, you got a permission problem, you don&#39;t have the permission.&#xA;&#xA;So how to keep the current edit and save the change, the simple command is :w !sudo tee %.&#xA;&#xA;Explanation as below:&#xA;&#xA;:w – Write a file (actually buffer).&#xA;!sudo – Call shell with sudo command.&#xA;tee – The output of write (vim :w) command redirected using tee.&#xA;% – The % is nothing but current file name.&#xA;&#xA;But what about Emacs, how do we do this in Emacs, we also got one:&#xA;&#xA;C-x C-f /sudo::/path/to/file&#xA;&#xA;It uses Tramp module to do the same thing.&#xA;&#xA;#vim #emacs]]&gt;</description>
<content:encoded><![CDATA[<p>Today I learned a trick on <a href="https://www.cyberciti.biz/faq/vim-vi-text-editor-save-file-without-root-permission/" rel="nofollow">nixCraft</a> to save a file without root permission in vim.</p>

<p>The simple case is like this:</p>

<p>You open a config file in vim, and only after editing do you discover that you don&#39;t have permission to save it.</p>

<p>So how do you keep the current edits and save the change? The simple command is <strong>:w !sudo tee %</strong>.</p>

<p>Explanation:</p>

<pre><code>:w – Write a file (actually buffer).
!sudo – Call shell with sudo command.
tee – The output of write (vim :w) command redirected using tee.
% – The % is nothing but current file name.
</code></pre>

<p>But what about Emacs? How do we do this in Emacs? We also have <a href="https://stackoverflow.com/questions/95631/open-a-file-with-su-sudo-inside-emacs" rel="nofollow">one</a>:</p>

<pre><code>C-x C-f /sudo::/path/to/file
</code></pre>

<p>It uses the TRAMP module to do the same thing.</p>

<p><a href="https://rex.writeas.com/tag:vim" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vim</span></a> <a href="https://rex.writeas.com/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/write-with-sudo</guid>
      <pubDate>Sun, 12 Dec 2021 14:27:38 +0000</pubDate>
    </item>
    <item>
      <title>Limit SATA Speed</title>
      <link>https://rex.writeas.com/limit-sata-speed?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Recently we got a disk write problem during our service running.&#xA;&#xA;The dmesg log is as following:&#xA;&#xA;[08:32:30 2021] NET: Registered protocol family 38&#xA;[08:32:30 2021] EXT4-fs (dm-0): warning: mounting fs with errors, running e2fsck is recommended&#xA;[08:32:30 2021] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)&#xA;[08:32:30 2021] ata2.00: exception Emask 0x10 SAct 0x2 SErr 0x800000 action 0x6 frozen&#xA;[08:32:30 2021] ata2.00: irqstat 0x08000000, interface fatal error&#xA;[08:32:30 2021] ata2: SError: { LinkSeq }&#xA;[08:32:30 2021] ata2.00: failed command: READ FPDMA QUEUED&#xA;[08:32:30 2021] ata2.00: cmd 60/08:08:78:19:c1/00:00:12:00:00/40 tag 1 ncq dma 4096 in&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:32:30 2021] ata2.00: status: { DRDY }&#xA;[08:32:30 2021] ata2: hard resetting link&#xA;[08:32:31 2021] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 320)&#xA;[08:32:31 2021] ata2.00: configured for UDMA/133&#xA;[08:32:31 2021] ata2: EH complete&#xA;[08:32:31 2021] ata2.00: Enabling discardzeroesdata&#xA;[08:33:08 2021] ata2.00: exception Emask 0x0 SAct 0xc0000 SErr 0x400001 action 0x6 frozen&#xA;[08:33:08 2021] ata2: SError: { RecovData Handshk }&#xA;[08:33:08 2021] ata2.00: failed command: READ FPDMA QUEUED&#xA;[08:33:08 2021] ata2.00: cmd 60/08:90:00:1d:c1/00:00:12:00:00/40 tag 18 ncq dma 4096 in&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)&#xA;[08:33:08 2021] ata2.00: status: { DRDY }&#xA;[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:08 2021] ata2.00: cmd 61/08:98:00:18:c4/00:00:6f:00:00/40 tag 19 ncq dma 4096 out&#xA;                         res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)&#xA;[08:33:08 2021] ata2.00: status: { DRDY }&#xA;[08:33:08 2021] ata2: hard resetting link&#xA;[08:33:08 2021] ata2: SATA link up 3.0 Gbps (SStatus 123 
SControl 320)&#xA;[08:33:08 2021] ata2.00: configured for UDMA/133&#xA;[08:33:08 2021] ata2.00: device reported invalid CHS sector 0&#xA;[08:33:08 2021] ata2: EH complete&#xA;[08:33:08 2021] ata2.00: Enabling discardzeroesdata&#xA;[08:33:08 2021] ata2.00: exception Emask 0x10 SAct 0xc00000 SErr 0x400100 action 0x6 frozen&#xA;[08:33:08 2021] ata2.00: irqstat 0x08000000, interface fatal error&#xA;[08:33:08 2021] ata2: SError: { UnrecovData Handshk }&#xA;[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:08 2021] ata2.00: cmd 61/08:b0:00:18:c4/00:00:6f:00:00/40 tag 22 ncq dma 4096 out&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:08 2021] ata2.00: status: { DRDY }&#xA;[08:33:08 2021] ata2.00: failed command: READ FPDMA QUEUED&#xA;[08:33:08 2021] ata2.00: cmd 60/08:b8:00:1d:c1/00:00:12:00:00/40 tag 23 ncq dma 4096 in&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:08 2021] ata2.00: status: { DRDY }&#xA;[08:33:08 2021] ata2: hard resetting link&#xA;[08:33:08 2021] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 320)&#xA;[08:33:08 2021] ata2.00: configured for UDMA/133&#xA;[08:33:08 2021] ata2: EH complete&#xA;[08:33:08 2021] ata2.00: Enabling discardzeroesdata&#xA;[08:33:08 2021] ata2: limiting SATA link speed to 1.5 Gbps&#xA;[08:33:08 2021] ata2.00: exception Emask 0x10 SAct 0x3fc SErr 0x400100 action 0x6 frozen&#xA;[08:33:08 2021] ata2.00: irqstat 0x08000000, interface fatal error&#xA;[08:33:08 2021] ata2: SError: { UnrecovData Handshk }&#xA;[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:08 2021] ata2.00: cmd 61/30:10:10:18:c4/00:00:6f:00:00/40 tag 2 ncq dma 24576 out&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:08 2021] ata2.00: status: { DRDY }&#xA;[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:08 2021] ata2.00: cmd 
61/18:18:40:18:c4/00:00:6f:00:00/40 tag 3 ncq dma 12288 out&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:08 2021] ata2.00: status: { DRDY }&#xA;[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:08 2021] ata2.00: cmd 61/08:20:60:18:c4/00:00:6f:00:00/40 tag 4 ncq dma 4096 out&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:08 2021] ata2.00: status: { DRDY }&#xA;[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:09 2021] ata2.00: cmd 61/08:28:68:18:c4/00:00:6f:00:00/40 tag 5 ncq dma 4096 out&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:09 2021] ata2.00: status: { DRDY }&#xA;[08:33:09 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:09 2021] ata2.00: cmd 61/08:30:58:18:c4/00:00:6f:00:00/40 tag 6 ncq dma 4096 out&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:09 2021] ata2.00: status: { DRDY }&#xA;[08:33:09 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:09 2021] ata2.00: cmd 61/38:38:70:18:c4/00:00:6f:00:00/40 tag 7 ncq dma 28672 out&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:09 2021] ata2.00: status: { DRDY }&#xA;[08:33:09 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:09 2021] ata2.00: cmd 61/08:40:a8:18:c4/00:00:6f:00:00/40 tag 8 ncq dma 4096 out&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:09 2021] ata2.00: status: { DRDY }&#xA;[08:33:09 2021] ata2.00: failed command: WRITE FPDMA QUEUED&#xA;[08:33:09 2021] ata2.00: cmd 61/20:48:b8:18:c4/00:00:6f:00:00/40 tag 9 ncq dma 16384 out&#xA;                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)&#xA;[08:33:09 2021] ata2.00: status: { DRDY 
}&#xA;[08:33:09 2021] ata2: hard resetting link&#xA;[08:33:09 2021] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)&#xA;[08:33:09 2021] ata2.00: configured for UDMA/133&#xA;[08:33:09 2021] ata2: EH complete&#xA;[08:33:09 2021] ata2.00: Enabling discardzeroesdata&#xA;&#xA;The default speed for SATA is 6.0 Gbps, but during the device running, something hardware problem happens, and the original speed is not met.&#xA;&#xA;After several handshakes, the speed is limited to 1.5 Gbps.&#xA;&#xA;The whole procedure is normal for disk problem, but it takes 39 seconds(from 08:32:30 to 08:33:09), and during this time, the disk is blocked and programs can&#39;t write data to the disk.&#xA;&#xA;It certainly is a hardware problem, maybe caused by some dust in the hard disk interface or due to violent vibration, but how can we mitigate this problem in the system level?&#xA;&#xA;We checked the normal write speed of the disk and found the lowest speed(1.5Gbps) is enough for our usage. So the simple way is the limit the SATA speed at this speed to reduce the handshake times when hardware problem happends.&#xA;&#xA;To implement this limit we can add libata.force option to kernel:&#xA;&#xA;GRUBCMDLINE_LINUX=&#34;libata.force=1.5&#34;&#xA;&#xA;#sata #linux #disk]]&gt;</description>
      <content:encoded><![CDATA[<p>Recently we got a disk write problem during our service running.</p>

<p>The dmesg log is as follows:</p>

<pre><code class="language-log">[08:32:30 2021] NET: Registered protocol family 38
[08:32:30 2021] EXT4-fs (dm-0): warning: mounting fs with errors, running e2fsck is recommended
[08:32:30 2021] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[08:32:30 2021] ata2.00: exception Emask 0x10 SAct 0x2 SErr 0x800000 action 0x6 frozen
[08:32:30 2021] ata2.00: irq_stat 0x08000000, interface fatal error
[08:32:30 2021] ata2: SError: { LinkSeq }
[08:32:30 2021] ata2.00: failed command: READ FPDMA QUEUED
[08:32:30 2021] ata2.00: cmd 60/08:08:78:19:c1/00:00:12:00:00/40 tag 1 ncq dma 4096 in
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:32:30 2021] ata2.00: status: { DRDY }
[08:32:30 2021] ata2: hard resetting link
[08:32:31 2021] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[08:32:31 2021] ata2.00: configured for UDMA/133
[08:32:31 2021] ata2: EH complete
[08:32:31 2021] ata2.00: Enabling discard_zeroes_data
[08:33:08 2021] ata2.00: exception Emask 0x0 SAct 0xc0000 SErr 0x400001 action 0x6 frozen
[08:33:08 2021] ata2: SError: { RecovData Handshk }
[08:33:08 2021] ata2.00: failed command: READ FPDMA QUEUED
[08:33:08 2021] ata2.00: cmd 60/08:90:00:1d:c1/00:00:12:00:00/40 tag 18 ncq dma 4096 in
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[08:33:08 2021] ata2.00: status: { DRDY }
[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:08 2021] ata2.00: cmd 61/08:98:00:18:c4/00:00:6f:00:00/40 tag 19 ncq dma 4096 out
                         res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[08:33:08 2021] ata2.00: status: { DRDY }
[08:33:08 2021] ata2: hard resetting link
[08:33:08 2021] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[08:33:08 2021] ata2.00: configured for UDMA/133
[08:33:08 2021] ata2.00: device reported invalid CHS sector 0
[08:33:08 2021] ata2: EH complete
[08:33:08 2021] ata2.00: Enabling discard_zeroes_data
[08:33:08 2021] ata2.00: exception Emask 0x10 SAct 0xc00000 SErr 0x400100 action 0x6 frozen
[08:33:08 2021] ata2.00: irq_stat 0x08000000, interface fatal error
[08:33:08 2021] ata2: SError: { UnrecovData Handshk }
[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:08 2021] ata2.00: cmd 61/08:b0:00:18:c4/00:00:6f:00:00/40 tag 22 ncq dma 4096 out
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:08 2021] ata2.00: status: { DRDY }
[08:33:08 2021] ata2.00: failed command: READ FPDMA QUEUED
[08:33:08 2021] ata2.00: cmd 60/08:b8:00:1d:c1/00:00:12:00:00/40 tag 23 ncq dma 4096 in
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:08 2021] ata2.00: status: { DRDY }
[08:33:08 2021] ata2: hard resetting link
[08:33:08 2021] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[08:33:08 2021] ata2.00: configured for UDMA/133
[08:33:08 2021] ata2: EH complete
[08:33:08 2021] ata2.00: Enabling discard_zeroes_data
[08:33:08 2021] ata2: limiting SATA link speed to 1.5 Gbps
[08:33:08 2021] ata2.00: exception Emask 0x10 SAct 0x3fc SErr 0x400100 action 0x6 frozen
[08:33:08 2021] ata2.00: irq_stat 0x08000000, interface fatal error
[08:33:08 2021] ata2: SError: { UnrecovData Handshk }
[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:08 2021] ata2.00: cmd 61/30:10:10:18:c4/00:00:6f:00:00/40 tag 2 ncq dma 24576 out
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:08 2021] ata2.00: status: { DRDY }
[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:08 2021] ata2.00: cmd 61/18:18:40:18:c4/00:00:6f:00:00/40 tag 3 ncq dma 12288 out
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:08 2021] ata2.00: status: { DRDY }
[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:08 2021] ata2.00: cmd 61/08:20:60:18:c4/00:00:6f:00:00/40 tag 4 ncq dma 4096 out
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:08 2021] ata2.00: status: { DRDY }
[08:33:08 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:09 2021] ata2.00: cmd 61/08:28:68:18:c4/00:00:6f:00:00/40 tag 5 ncq dma 4096 out
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:09 2021] ata2.00: status: { DRDY }
[08:33:09 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:09 2021] ata2.00: cmd 61/08:30:58:18:c4/00:00:6f:00:00/40 tag 6 ncq dma 4096 out
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:09 2021] ata2.00: status: { DRDY }
[08:33:09 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:09 2021] ata2.00: cmd 61/38:38:70:18:c4/00:00:6f:00:00/40 tag 7 ncq dma 28672 out
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:09 2021] ata2.00: status: { DRDY }
[08:33:09 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:09 2021] ata2.00: cmd 61/08:40:a8:18:c4/00:00:6f:00:00/40 tag 8 ncq dma 4096 out
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:09 2021] ata2.00: status: { DRDY }
[08:33:09 2021] ata2.00: failed command: WRITE FPDMA QUEUED
[08:33:09 2021] ata2.00: cmd 61/20:48:b8:18:c4/00:00:6f:00:00/40 tag 9 ncq dma 16384 out
                         res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[08:33:09 2021] ata2.00: status: { DRDY }
[08:33:09 2021] ata2: hard resetting link
[08:33:09 2021] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[08:33:09 2021] ata2.00: configured for UDMA/133
[08:33:09 2021] ata2: EH complete
[08:33:09 2021] ata2.00: Enabling discard_zeroes_data
</code></pre>

<p>The default SATA link speed is 6.0 Gbps, but while the device was running, a hardware problem occurred and the link could no longer sustain the original speed.</p>

<p>After several failed handshakes, the kernel limited the link speed to 1.5 Gbps.</p>

<p>The whole procedure is normal handling for a disk problem, but it takes <strong>39</strong> seconds (from 08:32:30 to 08:33:09), and during this time the disk is blocked and programs can&#39;t write data to it.</p>

<p>It certainly is a hardware problem, maybe caused by dust in the disk interface or by violent vibration, but how can we mitigate it at the system level?</p>

<p>We checked the normal write speed of the disk and found that the lowest link speed (1.5 Gbps) is enough for our usage. So the simple fix is to limit the SATA link to this speed, which reduces the number of handshakes when the hardware problem happens.</p>

<p>To implement this limit, we can add the <a href="https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt" rel="nofollow"><strong>libata.force</strong></a> option to the kernel command line:</p>

<pre><code class="language-conf">GRUB_CMDLINE_LINUX=&#34;libata.force=1.5Gbps&#34;
</code></pre>
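<p>On a typical Ubuntu/GRUB setup, applying the option looks roughly like this (a sketch; the exact file and variable may differ by distribution):</p>

<pre><code class="language-shell"># edit /etc/default/grub and add the option, e.g.:
#   GRUB_CMDLINE_LINUX="libata.force=1.5Gbps"
sudo update-grub   # regenerate grub.cfg
sudo reboot

# after reboot, verify the option and the negotiated link speed
cat /proc/cmdline
dmesg | grep 'SATA link up'
</code></pre>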

<p><a href="https://rex.writeas.com/tag:sata" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">sata</span></a> <a href="https://rex.writeas.com/tag:linux" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">linux</span></a> <a href="https://rex.writeas.com/tag:disk" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">disk</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/limit-sata-speed</guid>
      <pubDate>Sun, 21 Nov 2021 05:26:20 +0000</pubDate>
    </item>
    <item>
      <title>Use Overlay Filesystem on Ubuntu</title>
      <link>https://rex.writeas.com/use-overlay-filesystem-on-ubuntu?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[How to use&#xA;How to disable&#xA;  Remount&#xA;  Use overlayroot-chroot&#xA;  Disable OverlayFS when booting&#xA;  Disable by overlayroot.conf&#xA;&#xA;a id=&#34;org08f1a6b&#34;/a&#xA;&#xA;How to use&#xA;&#xA;We have a device running Ubuntu and will be powered off directly without shutdown gracefully.&#xA;&#xA;In order to keep the file system from damage, we use the OverlayFS provided in Linux kernel since 3.18.&#xA;&#xA;The usage of OverlayFS is very simple, as following:&#xA;&#xA;first install overlayroot package&#xA;$ sudo apt-get install overlayroot&#xA;&#xA;second, change the config file /etc/overlayroot.conf&#xA;the simple config is as following&#xA;we enable swap, and disable recurse overlay&#xA;$ cat /etc/overlayroot.conf&#xA;overlayroot=&#34;tmpfs:swap=1,recurse=0&#34;&#xA;&#xA;After rebooting, we should see something like this:&#xA;&#xA;$ df -h&#xA;Filesystem              Size  Used Avail Use% Mounted on&#xA;udev                     16G  8.0K   16G   1% /dev&#xA;tmpfs                   3.1G   74M  3.1G   3% /run&#xA;/dev/sda3                96G   16G   76G  17% /media/root-ro&#xA;tmpfs-root               16G   60M   16G   1% /media/root-rw&#xA;overlayroot              16G   60M   16G   1% /&#xA;tmpfs                    16G   24K   16G   1% /dev/shm&#xA;tmpfs                   5.0M  4.0K  5.0M   1% /run/lock&#xA;tmpfs                    16G     0   16G   0% /sys/fs/cgroup&#xA;&#xA;$ mount&#xA;[...]&#xA;configfs on /sys/kernel/config type configfs (rw,relatime)&#xA;overlayroot on /var/cache/apt/archives type overlay (rw,relatime,lowerdir=/media/root-ro,upperdir=/media/root-rw/overlay,workdir=/media/root-rw/overlay-workdir/)&#xA;overlayroot on /opt/var/cache/apt/archives type overlay (rw,relatime,lowerdir=/media/root-ro,upperdir=/media/root-rw/overlay,workdir=/media/root-rw/overlay-workdir/)&#xA;overlayroot on /var/lib/apt/lists type overlay 
(rw,relatime,lowerdir=/media/root-ro,upperdir=/media/root-rw/overlay,workdir=/media/root-rw/overlay-workdir/)&#xA;overlayroot on /opt/var/lib/apt/lists type overlay (rw,relatime,lowerdir=/media/root-ro,upperdir=/media/root-rw/overlay,workdir=/media/root-rw/overlay-workdir/)&#xA;&#xA;a id=&#34;orgf9f1ab8&#34;/a&#xA;&#xA;How to disable&#xA;&#xA;When you change some file under OverlayFS, and after the reboot, the file will keep the same.&#xA;&#xA;But sometimes you do want to change the original file, how to disable this feature?&#xA;&#xA;We have these methods:&#xA;&#xA;Remount the disk with rw, and change the file under lowerdir&#xA;Use overlayroot-chroot tool provided by the package&#xA;Disable OverlayFS when booting&#xA;Disable by overlayroot.conf&#xA;&#xA;a id=&#34;orgdd61966&#34;/a&#xA;&#xA;Remount&#xA;&#xA;If you just want to change some files, this is very direct, remount the block device and change the file under lowerdir.&#xA;&#xA;remount with read-write&#xA;$ sudo mount -o remount,rw /dev/sda3&#xA;&#xA;say we want to change overlayroot.conf&#xA;note: we must change under the file under the lowerdir: /media/root-ro&#xA;$ sudo vim /media/root-ro/etc/overlayroot.conf&#xA;&#xA;remount with read-only&#xA;$ sudo mount -o remount,ro /dev/sda3&#xA;&#xA;a id=&#34;orgd65b959&#34;/a&#xA;&#xA;Use overlayroot-chroot&#xA;&#xA;If we want to install some package and keep the package after reboot, we can&#39;t use the first method, since the package may change many files under different directories.&#xA;&#xA;We still have a simple way, you run overlayroot-chroot with root, make changes, and the changes will be saved after reboot.&#xA;&#xA;$ sudo overlayroot-chroot&#xA;&#xA;The change may not take effect immediately after exit the command, you can mount again like this:&#xA;&#xA;$ sudo mount -o remount /&#xA;&#xA;a id=&#34;org5f572fb&#34;/a&#xA;&#xA;Disable OverlayFS when booting&#xA;&#xA;The overlayroot-chroot method may solve 90% of the problem, but it do has some 
limitation.&#xA;&#xA;The overlayroot-chroot just like chroot into the lower filesystem, and remount with writable.&#xA;&#xA;If you have some scripts-say postinstall in some package, checks the chroot mode, it may refuse to execute under this case.&#xA;&#xA;To fix this problem, we can disable OverlayFS during the booting phase.&#xA;&#xA;We can edit the boot command line, append overlayroot=disabled and boot again.&#xA;&#xA;  linux /vmlinuz-4.15.0-123-lowlatency root=UUID=bfb40993-3xxxx ro systemd.unit=multi-user.target overlayroot=disabled&#xA;&#xA;Under this case, the OverlayFS will be disabled completely.&#xA;&#xA;a id=&#34;orga277402&#34;/a&#xA;&#xA;Disable by overlayroot.conf&#xA;&#xA;We can disable OverlayFS during the boot time, but if we want to keep it disabled after several reboots, we have a simple way.&#xA;&#xA;You can change the overlayroot.conf config file using the remount method, and comment all the lines, and then reboot again.&#xA;&#xA;If we want to enable, we can remount and un-comment the config lines, and after reboot, the OverlayFS will be enabled agian.&#xA;&#xA;$ cat /media/root-ro/etc/overlayroot.conf&#xA;overlayroot=&#34;tmpfs:swap=1,recurse=0&#34;&#xA;&#xA;#overlayfs #linux]]&gt;</description>
      <content:encoded><![CDATA[<ul><li><a href="#org08f1a6b" rel="nofollow">How to use</a></li>
<li><a href="#orgf9f1ab8" rel="nofollow">How to disable</a>
<ul><li><a href="#orgdd61966" rel="nofollow">Remount</a></li>
<li><a href="#orgd65b959" rel="nofollow">Use overlayroot-chroot</a></li>
<li><a href="#org5f572fb" rel="nofollow">Disable OverlayFS when booting</a></li>
<li><a href="#orga277402" rel="nofollow">Disable by overlayroot.conf</a></li></ul></li></ul>

<p><a id="org08f1a6b" id="org08f1a6b"></a></p>

<h1 id="how-to-use" id="how-to-use">How to use</h1>

<p>We have a device running Ubuntu that may be powered off directly, without a graceful shutdown.</p>

<p>In order to keep the file system from damage, we use <a href="https://en.wikipedia.org/wiki/OverlayFS" rel="nofollow">OverlayFS</a>, which the Linux kernel has provided since 3.18.</p>

<p>The usage of <strong>OverlayFS</strong> is very simple, as follows:</p>

<pre><code class="language-shell"># first install overlayroot package
$ sudo apt-get install overlayroot

# second, change the config file /etc/overlayroot.conf
# the simple config is as following
# we enable swap, and disable recurse overlay
$ cat /etc/overlayroot.conf
overlayroot=&#34;tmpfs:swap=1,recurse=0&#34;
</code></pre>

<p>After rebooting, we should see something like this:</p>

<pre><code class="language-shell">$ df -h
Filesystem              Size  Used Avail Use% Mounted on
udev                     16G  8.0K   16G   1% /dev
tmpfs                   3.1G   74M  3.1G   3% /run
/dev/sda3                96G   16G   76G  17% /media/root-ro
tmpfs-root               16G   60M   16G   1% /media/root-rw
overlayroot              16G   60M   16G   1% /
tmpfs                    16G   24K   16G   1% /dev/shm
tmpfs                   5.0M  4.0K  5.0M   1% /run/lock
tmpfs                    16G     0   16G   0% /sys/fs/cgroup

$ mount
[...]
configfs on /sys/kernel/config type configfs (rw,relatime)
overlayroot on /var/cache/apt/archives type overlay (rw,relatime,lowerdir=/media/root-ro,upperdir=/media/root-rw/overlay,workdir=/media/root-rw/overlay-workdir/_)
overlayroot on /opt/var/cache/apt/archives type overlay (rw,relatime,lowerdir=/media/root-ro,upperdir=/media/root-rw/overlay,workdir=/media/root-rw/overlay-workdir/_)
overlayroot on /var/lib/apt/lists type overlay (rw,relatime,lowerdir=/media/root-ro,upperdir=/media/root-rw/overlay,workdir=/media/root-rw/overlay-workdir/_)
overlayroot on /opt/var/lib/apt/lists type overlay (rw,relatime,lowerdir=/media/root-ro,upperdir=/media/root-rw/overlay,workdir=/media/root-rw/overlay-workdir/_)
</code></pre>
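<p>For reference, the mounts above are ordinary overlay mounts that could be created by hand. A minimal sketch with hypothetical paths (requires root, since mounting needs privileges):</p>

<pre><code class="language-shell"># files are read from lower/, writes land in upper/,
# and work/ is overlayfs's internal scratch directory
mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
echo original | tee /tmp/ovl/lower/file
sudo mount -t overlay overlay \
    -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
    /tmp/ovl/merged

echo changed | sudo tee /tmp/ovl/merged/file   # copy-up: stored in upper/
cat /tmp/ovl/lower/file                        # lower/ is untouched
</code></pre>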

<p><a id="orgf9f1ab8" id="orgf9f1ab8"></a></p>

<h1 id="how-to-disable" id="how-to-disable">How to disable</h1>

<p>When you change a file under <strong>OverlayFS</strong>, the change is discarded on reboot and the file reverts to its original content.</p>

<p>But sometimes you do want to change the original file. How can you disable this feature?</p>

<p>We have these methods:</p>
<ul><li>Remount the disk with rw, and change the file under lowerdir</li>
<li>Use <a href="http://manpages.ubuntu.com/manpages/impish/man8/overlayroot-chroot.8.html" rel="nofollow">overlayroot-chroot</a> tool provided by the package</li>
<li>Disable OverlayFS when booting</li>
<li>Disable by overlayroot.conf</li></ul>

<p><a id="orgdd61966" id="orgdd61966"></a></p>

<h2 id="remount" id="remount">Remount</h2>

<p>If you just want to change a few files, the most direct way is to remount the block device read-write and change the files under lowerdir.</p>

<pre><code class="language-shell"># remount with read-write
$ sudo mount -o remount,rw /dev/sda3

# say we want to change overlayroot.conf
# note: we must change under the file under the lowerdir: /media/root-ro
$ sudo vim /media/root-ro/etc/overlayroot.conf

# remount with read-only
$ sudo mount -o remount,ro /dev/sda3
</code></pre>

<p><a id="orgd65b959" id="orgd65b959"></a></p>

<h2 id="use-overlayroot-chroot" id="use-overlayroot-chroot">Use overlayroot-chroot</h2>

<p>If we want to install a package and keep it after reboot, we can&#39;t use the first method, since the package may change many files under different directories.</p>

<p>There is still a simple way: run <strong>overlayroot-chroot</strong> as root, make your changes, and they will be preserved across reboots.</p>

<pre><code class="language-shell">$ sudo overlayroot-chroot
</code></pre>

<p>The changes may not take effect immediately after exiting the command; you can remount like this:</p>

<pre><code class="language-shell">$ sudo mount -o remount /
</code></pre>

<p><a id="org5f572fb" id="org5f572fb"></a></p>

<h2 id="disable-overlayfs-when-booting" id="disable-overlayfs-when-booting">Disable OverlayFS when booting</h2>

<p>The overlayroot-chroot method may solve 90% of the problem, but it does have some limitations.</p>

<p>The <strong>overlayroot-chroot</strong> tool essentially chroots into the lower filesystem and remounts it writable.</p>

<p>If some script, say the post-install script of a package, checks whether it is running in a chroot, it may refuse to execute in this situation.</p>

<p>To fix this problem, we can disable <strong>OverlayFS</strong> during the boot phase.</p>

<p>We can edit the boot command line, append <strong>overlayroot=disabled</strong>, and boot again.</p>

<blockquote><p>linux /vmlinuz-4.15.0-123-lowlatency root=UUID=bfb40993-3xxxx ro systemd.unit=multi-user.target <strong>overlayroot=disabled</strong></p></blockquote>

<p>In this case, <strong>OverlayFS</strong> is disabled completely.</p>

<p><a id="orga277402" id="orga277402"></a></p>

<h2 id="disable-by-overlayroot-conf" id="disable-by-overlayroot-conf">Disable by overlayroot.conf</h2>

<p>We can disable <strong>OverlayFS</strong> at boot time, but if we want to keep it disabled across several reboots, there is a simpler way.</p>

<p>You can edit the <strong>overlayroot.conf</strong> config file using the remount method, comment out all the lines, and then reboot.</p>

<p>If we want to enable it again, we can remount, un-comment the config lines, and after a reboot <strong>OverlayFS</strong> will be enabled again.</p>

<pre><code class="language-shell">$ cat /media/root-ro/etc/overlayroot.conf
# overlayroot=&#34;tmpfs:swap=1,recurse=0&#34;
</code></pre>

<p><a href="https://rex.writeas.com/tag:overlayfs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">overlayfs</span></a> <a href="https://rex.writeas.com/tag:linux" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">linux</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/use-overlay-filesystem-on-ubuntu</guid>
      <pubDate>Sun, 21 Nov 2021 04:29:06 +0000</pubDate>
    </item>
    <item>
      <title>Use shell commands in Makefile</title>
      <link>https://rex.writeas.com/use-shell-commands-in-makefile-j0df?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I have several related packages in different git repositories, and each repository has several branches. It is a headache to package all of these packages during release period.&#xA;&#xA;So I decided to create a new repository including all of the packages, and package the single branch at one time.&#xA;&#xA;The simple project structure, we have several packages in pkg directory, and each has its own build.sh.&#xA;&#xA;Since it is very simple, don&#39;t want to just add one Makefile for the whole thing, the content is like:&#xA;&#xA;CWD:=$(shell pwd)&#xA;BUILD:=$(CWD)/build&#xA;PACKAGE_NAME:= app-$(shell date &#34;+%Y%m%d%H%M&#34;).tar.gz&#xA;&#xA;package:&#xA;        for pkg in $(ls -1 $(CWD)/pkgs); do \&#xA;                echo &#34;### starting build package: $pkg...&#34;; \&#xA;                $(CWD)/pkgs/$pkg/build.sh  $(BUILD); \&#xA;                echo &#34;### finish build package $pkg&#34;; \&#xA;                echo ; \&#xA;        done&#xA;&#xA;And after I run make, I got no success.&#xA;&#xA;for pkg in ; do \&#xA;        echo &#34;### starting build package: kg...&#34;; \&#xA;        /home/xxx/app/pkgs/kg/build.sh  /home/xxx/app/build; \&#xA;        echo &#34;### finish build package kg&#34;; \&#xA;        echo ; \&#xA;done&#xA;&#xA;The ls command and $pkg are not executed/expanded correctly.&#xA;&#xA;After some search I know that the shell commands in makefile may be invoked in one shell, and the statements will be expanded twice for shell commands, so we need double dollar signsfor shell variables.&#xA;&#xA;Essentially, gmake scans the command-line for shell built-ins (like for and if) and “shell special characters” (like | and &amp;). 
If none of these are present in the command-line, gmake will avoid the overhead of the shell invocation by invoking the command directly (literally just using execve to run the command).&#xA;[...]&#xA;gmake expands command-lines before executing them.&#xA;&#xA;Command expansion is why you can use gmake features like variables (eg, $@) and functions (eg, $(foreach)) in the recipe. It is also why you must use double dollar signs if you want to reference shell variables in your recipe...&#xA;&#xA;The correct statement is:&#xA;&#xA;for pkg in $$(ls -1 $(CWD)/pkgs); do \&#xA;        echo &#34;### starting build package: $$pkg...&#34;; \&#xA;        $(CWD)/pkgs/$$pkg/build.sh  $(BUILD); \&#xA;        echo &#34;### finish build package $$pkg&#34;; \&#xA;        echo ; \&#xA;done&#xA;&#xA;Since CWD and BUILD are variables in Makefile, so there are referenced with single dollar signs. And ls and pkg are variables in shell, theses variables are referenced with double dollar signs.&#xA;&#xA;\#shell #makefile]]&gt;</description>
      <content:encoded><![CDATA[<p>I have several related packages in different git repositories, and each repository has several branches. It is a headache to package all of these packages during release period.</p>

<p>So I decided to create a new repository containing all of the packages, so that a single branch can be packaged in one go.</p>

<p>The project structure is simple: we have several packages in the pkgs directory, and each has its own build.sh.</p>

<p>Since it is so simple, I want to just add one Makefile for the whole thing; its content is like this:</p>

<pre><code class="language-shell">CWD:=$(shell pwd)
BUILD:=$(CWD)/build
PACKAGE_NAME:= app-$(shell date &#34;+%Y%m%d%H%M&#34;).tar.gz

package:
        for pkg in $(ls -1 $(CWD)/pkgs); do \
                echo &#34;### starting build package: $pkg...&#34;; \
                $(CWD)/pkgs/$pkg/build.sh  $(BUILD); \
                echo &#34;### finish build package $pkg&#34;; \
                echo ; \
        done
</code></pre>

<p>But after running <strong>make</strong>, I had no success; the expanded command was:</p>

<pre><code class="language-shell">for pkg in ; do \
        echo &#34;### starting build package: kg...&#34;; \
        /home/xxx/app/pkgs/kg/build.sh  /home/xxx/app/build; \
        echo &#34;### finish build package kg&#34;; \
        echo ; \
done
</code></pre>

<p>The <strong>ls</strong> command and <strong>$pkg</strong> are not executed/expanded correctly.</p>

<p>After some <a href="https://blog.melski.net/2010/11/15/shell-commands-in-gnu-make/" rel="nofollow">search</a> I learned that the shell commands in a makefile recipe are expanded by make itself before being passed to the shell, so we need <strong>double dollar</strong> signs for shell variables.</p>

<pre><code class="language-log">Essentially, gmake scans the command-line for shell built-ins (like for and if) and “shell special characters” (like | and &amp;). If none of these are present in the command-line, gmake will avoid the overhead of the shell invocation by invoking the command directly (literally just using execve to run the command).
[...]
gmake expands command-lines before executing them.

Command expansion is why you can use gmake features like variables (eg, $@) and functions (eg, $(foreach)) in the recipe. It is also why you must use double dollar signs if you want to reference shell variables in your recipe...
</code></pre>

<p>The correct statement is:</p>

<pre><code class="language-shell">for pkg in $$(ls -1 $(CWD)/pkgs); do \
        echo &#34;### starting build package: $$pkg...&#34;; \
        $(CWD)/pkgs/$$pkg/build.sh  $(BUILD); \
        echo &#34;### finish build package $$pkg&#34;; \
        echo ; \
done
</code></pre>

<p>Since <strong>CWD</strong> and <strong>BUILD</strong> are Makefile variables, they are referenced with <strong>single</strong> dollar signs. The <strong>ls</strong> command substitution and the <strong>pkg</strong> variable belong to the shell, so these are referenced with <strong>double</strong> dollar signs.</p>
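<p>The rule can be checked with a tiny throwaway Makefile (hypothetical variable and target names):</p>

<pre><code class="language-shell"># GREETING is a make variable: $(GREETING) is expanded by make itself.
# w is a shell variable: $$w reaches the shell as $w.
printf 'GREETING := hello\ndemo:\n\t@for w in $(GREETING) world; do echo "got: $$w"; done\n' > /tmp/demo.mk
make -f /tmp/demo.mk demo
</code></pre>

<p>This prints <code>got: hello</code> and <code>got: world</code>; with a single dollar sign, make would have expanded <code>$w</code> itself (to an empty string) before the shell ever ran.</p>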

<p>#shell <a href="https://rex.writeas.com/tag:makefile" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">makefile</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/use-shell-commands-in-makefile-j0df</guid>
      <pubDate>Sun, 20 Jun 2021 06:44:52 +0000</pubDate>
    </item>
    <item>
      <title>Buffer mode change</title>
      <link>https://rex.writeas.com/buffer-mode-change?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I am using ppscheck to monitor with PPS status recently, and found a problem for the tool.&#xA;&#xA;The task is quite simple. It checks the PPS status, with ppscheck, queries the output periodically, and it is done with Python subprocess.&#xA;&#xA;Sample code is like this:&#xA;&#xA;cmd = &#34;sudo ppscheck /dev/ttyS0&#34;&#xA;args = shlex.split(cmd)&#xA;proc = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)&#xA;fd = proc.stdout.fileno()&#xA;&#xA;poller = select.epoll()&#xA;poller.register(fd, select.EPOLLIN)&#xA;&#xA;I create a process and register the stdout fd to epoll, but after the running the fd is not readable.&#xA;&#xA;I try to execute the command by shell, the output is normal. And then I redirect all the outputs to one file, but find the file is empty!&#xA;&#xA;running OK&#xA;$ sudo ppscheck /dev/ttyS0&#xA;&#xA;redirect all output to file, file is empty&#xA;$ sudo ppscheck /dev/ttyS0   pps.log 2  &amp;1&#xA;$ cat pps.log&#xA;$&#xA;&#xA;The first thing came to me is that the ppscheck tool buffers the output, and furthermore the mode is not line buffered.&#xA;&#xA;And after some search I find stdbuf can be use to change the buffer mode of a program.&#xA;&#xA;After add &#34;stdbuf -oL&#34; to the command, and the problem is solved.&#xA;&#xA;cmd = &#34;sudo stdbuf -oL ppscheck /dev/ttyS0&#34;&#xA;&#xA;And now I am wondering how stdbuf is implemented to achieve this goal.&#xA;&#xA;It may read the output of the program, and when receive newline and then make a flush() call? Then I realize it can&#39;t be done in this way, since the original program is buffered, you will wait until the buffer is full and produce the outputs.&#xA;&#xA;So I check the source code of stdbuf, the main code is as following:&#xA;&#xA;/ main function /&#xA;if (! 
setlibstdbufoptions ())&#xA;  {&#xA;    error (0, 0, (&#34;you must specify a buffering mode option&#34;));&#xA;    usage (EXITCANCELED);&#xA;  }&#xA;&#xA;/ Try to preload libstdbuf first from the same path as&#xA;   stdbuf is running from.  /&#xA;setprogrampath (programname);&#xA;if (!programpath)&#xA;  programpath = xstrdup (PKGLIBDIR);  / Need to init to non-NULL.  /&#xA;setLDPRELOAD ();&#xA;free (programpath);&#xA;&#xA;execvp (argv, argv);&#xA;&#xA;int exitstatus = errno == ENOENT ? EXITENOENT : EXITCANNOTINVOKE;&#xA;error (0, errno, (&#34;failed to run command %s&#34;), quote (argv[0]));&#xA;return exitstatus;&#xA;&#xA;/ setlibstdbufoptions /&#xA;if (stdbuf[i].optarg == &#39;L&#39;)&#xA;  ret = asprintf (&amp;var, &#34;%s%c=L&#34;, &#34;STDBUF&#34;,&#xA;                  toupper (stdbuf[i].optc));&#xA;else&#xA;  ret = asprintf (&amp;var, &#34;%s%c=%&#34; PRIuMAX, &#34;STDBUF&#34;,&#xA;                  toupper (stdbuf[i].optc),&#xA;                  (uintmaxt) stdbuf[i].size);&#xA;if (ret &lt; 0)&#xA;  xallocdie ();&#xA;&#xA;if (putenv (var) != 0)&#xA;{&#xA;  die (EXITCANCELED, errno,&#xA;       (&#34;failed to update the environment with %s&#34;),&#xA;       quote (var));&#xA;}&#xA;&#xA;The above logic is very simple, basically it only set buffer options, and then executed the program with execvp.&#xA;&#xA;The key part is how the buffer mode is set, the program use a nice way to archive this, via LD\PRELOAD trick.&#xA;&#xA;The stdbuf has a nother part called libstd.&#xA;&#xA;Stdbuf set two kinds of environment variables before run the program. The first one is the buffer mode, variables are STDBUFI, STDBUFO and STDBUFE for stdin, stdout and stderr. The second one is the LDPRELOAD environment variable, adding libstdbuf. And after libstdbuf is loaded, the buffer mode is set, the code is as following:&#xA;&#xA;/ Use attribute to avoid elision of attribute on SUNPROC etc.  
/&#xA;static void _attribute ((constructor))&#xA;stdbuf (void)&#xA;{&#xA;  char emode = getenv (&#34;STDBUFE&#34;);&#xA;  char imode = getenv (&#34;STDBUFI&#34;);&#xA;  char omode = getenv (&#34;STDBUFO&#34;);&#xA;  if (emode) / Do first so can write errors to stderr  /&#xA;    applymode (stderr, emode);&#xA;  if (imode)&#xA;    applymode (stdin, imode);&#xA;  if (omode)&#xA;    applymode (stdout, omode);&#xA;}&#xA;&#xA;/ core part of applymode */&#xA;if (setvbuf (stream, buf, setvbufmode, size) != 0)&#xA;  {&#xA;    fprintf (stderr, (&#34;could not set buffering of %s to mode %s\n&#34;),&#xA;             filenotoname (fileno (stream)), mode);&#xA;    free (buf);&#xA;  }&#xA;&#xA;The _attribute ((constructor)) is the entry point of shared library in GCC.&#xA;&#xA;So when libstdbuf.so is loaded, stdbuf function is called and buffer mode is set.&#xA;&#xA;And after that, the original program is executed normally.&#xA;&#xA;\#stdbuf #libstdbuf #ld #LD\\\PRELOAD #ppscheck #coreutil]]&gt;</description>
      <content:encoded><![CDATA[<p>I am using <code>ppscheck</code> to monitor with PPS status recently, and found a problem for the tool.</p>

<p>The task is quite simple: check the PPS status with <code>ppscheck</code> and poll its output periodically, using the Python subprocess module.</p>

<p>Sample code is like this:</p>

<pre><code class="language-python">import select
import shlex
import subprocess

cmd = &#34;sudo ppscheck /dev/ttyS0&#34;
args = shlex.split(cmd)
proc = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
fd = proc.stdout.fileno()

poller = select.epoll()
poller.register(fd, select.EPOLLIN)
</code></pre>

<p>I create the process and register its stdout fd with epoll, but once running, the fd never becomes readable.</p>

<p>I tried executing the command from a shell, and the output was normal. Then I redirected all the output to a file, but found the file was empty!</p>

<pre><code class="language-shell"># running OK
$ sudo ppscheck /dev/ttyS0

# redirect all output to file, file is empty
$ sudo ppscheck /dev/ttyS0 &gt; pps.log 2&gt;&amp;1
$ cat pps.log
$
</code></pre>

<p>The first thing that came to mind is that <code>ppscheck</code> buffers its output, and moreover that it is not line buffered when writing to a pipe.</p>

<p>After some searching I found that <code>stdbuf</code> can be used to change the buffering mode of a program.</p>

<p>After adding <code>stdbuf -oL</code> to the command, the problem was solved:</p>

<pre><code class="language-python">cmd = &#34;sudo stdbuf -oL ppscheck /dev/ttyS0&#34;
</code></pre>
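<p>One caveat (a side note of mine, beyond the original investigation): <code>stdbuf</code> only works on programs that buffer through C stdio. CPython, for example, does its own buffering, so the equivalent knobs there are <code>python -u</code>, <code>PYTHONUNBUFFERED=1</code>, or reconfiguring the stream:</p>

<pre><code class="language-python">import sys

# Python (3.7+) equivalent of `stdbuf -oL`: ask the text wrapper to flush
# at every newline. (stdbuf itself cannot reach Python's buffering, since
# CPython does not buffer through C stdio.)
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(line_buffering=True)
print("line buffered now")
</code></pre>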

<p>Now I am wondering how <code>stdbuf</code> is implemented to achieve this.</p>

<p>Could it read the output of the program and call flush() whenever it sees a newline? Then I realized it cannot work that way: the original program is still buffered, so no output reaches the reader until the program's buffer fills up.</p>

<p>So I checked the source code of <a href="https://github.com/coreutils/coreutils/blob/00ea4bacf6063ccc125209d5186f8f2382c6f0d4/src/stdbuf.c" rel="nofollow">stdbuf</a>; the main code is as follows:</p>

<pre><code class="language-c">/* main function */
if (! set_libstdbuf_options ())
  {
    error (0, 0, _(&#34;you must specify a buffering mode option&#34;));
    usage (EXIT_CANCELED);
  }

/* Try to preload libstdbuf first from the same path as
   stdbuf is running from.  */
set_program_path (program_name);
if (!program_path)
  program_path = xstrdup (PKGLIBDIR);  /* Need to init to non-NULL.  */
set_LD_PRELOAD ();
free (program_path);

execvp (*argv, argv);

int exit_status = errno == ENOENT ? EXIT_ENOENT : EXIT_CANNOT_INVOKE;
error (0, errno, _(&#34;failed to run command %s&#34;), quote (argv[0]));
return exit_status;

/* set_libstdbuf_options */
if (*stdbuf[i].optarg == &#39;L&#39;)
  ret = asprintf (&amp;var, &#34;%s%c=L&#34;, &#34;_STDBUF_&#34;,
                  toupper (stdbuf[i].optc));
else
  ret = asprintf (&amp;var, &#34;%s%c=%&#34; PRIuMAX, &#34;_STDBUF_&#34;,
                  toupper (stdbuf[i].optc),
                  (uintmax_t) stdbuf[i].size);
if (ret &lt; 0)
  xalloc_die ();

if (putenv (var) != 0)
{
  die (EXIT_CANCELED, errno,
       _(&#34;failed to update the environment with %s&#34;),
       quote (var));
}
</code></pre>

<p>The logic above is very simple: it only sets some buffering options (as environment variables), and then executes the target program with <code>execvp</code>.</p>

<p>The key part is how the buffering mode actually gets applied. The program uses a nice trick to achieve this: <a href="https://www.baeldung.com/linux/ld_preload-trick-what-is" rel="nofollow">LD_PRELOAD</a>.</p>
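<p>In other words, the front end just prepares the environment for the child before replacing itself with it. A rough Python sketch of that preparation (my own illustration; the variable names mirror coreutils, but the real tool also resolves the full path to <code>libstdbuf.so</code>):</p>

<pre><code class="language-python">import os

def stdbuf_env(base, i=None, o=None, e=None, lib="libstdbuf.so"):
    # Sketch of what the stdbuf front end does before execvp:
    # export one _STDBUF_* variable per requested stream mode, and
    # prepend libstdbuf.so to LD_PRELOAD so the dynamic linker loads it.
    env = dict(base)
    for name, mode in (("_STDBUF_I", i), ("_STDBUF_O", o), ("_STDBUF_E", e)):
        if mode is not None:
            env[name] = mode
    old = env.get("LD_PRELOAD")
    env["LD_PRELOAD"] = lib if not old else lib + ":" + old
    return env

# `stdbuf -oL cmd` in spirit; os.execvpe("cmd", ["cmd"], env) would
# then replace this process with the target command.
env = stdbuf_env(os.environ, o="L")
print(env["_STDBUF_O"], env["LD_PRELOAD"].split(":")[0])
</code></pre>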

<p><code>stdbuf</code> has another part, called <code>libstdbuf</code>.</p>

<p>stdbuf sets two kinds of environment variables before running the program. The first kind carries the buffering modes: <code>_STDBUF_I</code>, <code>_STDBUF_O</code> and <code>_STDBUF_E</code> for stdin, stdout and stderr. The second is the <code>LD_PRELOAD</code> variable, with <code>libstdbuf</code> added to it. When <code>libstdbuf</code> is loaded, it applies the buffering modes; the code is as follows:</p>

<pre><code class="language-c">/* Use __attribute to avoid elision of __attribute__ on SUNPRO_C etc.  */
static void __attribute ((constructor))
stdbuf (void)
{
  char *e_mode = getenv (&#34;_STDBUF_E&#34;);
  char *i_mode = getenv (&#34;_STDBUF_I&#34;);
  char *o_mode = getenv (&#34;_STDBUF_O&#34;);
  if (e_mode) /* Do first so can write errors to stderr  */
    apply_mode (stderr, e_mode);
  if (i_mode)
    apply_mode (stdin, i_mode);
  if (o_mode)
    apply_mode (stdout, o_mode);
}

/* core part of apply_mode */
if (setvbuf (stream, buf, setvbuf_mode, size) != 0)
  {
    fprintf (stderr, _(&#34;could not set buffering of %s to mode %s\n&#34;),
             fileno_to_name (fileno (stream)), mode);
    free (buf);
  }
</code></pre>

<p><code>__attribute ((constructor))</code> marks the function as a constructor in GCC: it runs automatically when the shared library is loaded, before <code>main</code>.</p>

<p>So when <code>libstdbuf.so</code> is loaded, the <code>stdbuf</code> function is called and the buffering modes are applied via <code>setvbuf</code>.</p>

<p>After that, the original program executes normally.</p>

<p>#stdbuf <a href="https://rex.writeas.com/tag:libstdbuf" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">libstdbuf</span></a> <a href="https://rex.writeas.com/tag:ld" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ld</span></a> <a href="https://rex.writeas.com/tag:LD" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">LD</span></a>_PRELOAD <a href="https://rex.writeas.com/tag:ppscheck" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ppscheck</span></a> <a href="https://rex.writeas.com/tag:coreutil" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">coreutil</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/buffer-mode-change</guid>
      <pubDate>Sun, 17 Jan 2021 07:12:47 +0000</pubDate>
    </item>
    <item>
      <title>Quick introduction about Expect</title>
      <link>https://rex.writeas.com/quick-introduction-about-expect?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The first import thing is how to debug&#xA;Expect with pattern and actions&#xA;Spawn new process&#xA;Interact with spawn process and continue&#xA;Signal handling&#xA;&#xA;Expect is a very useful tool to automate work with interactive applications, while also it is a very old. It is based on Tcl language which was created at 1988, and is not seen much nowdays except in test domain.&#xA;&#xA;The expect script is written with Tcl, and the syntax is a little wired compared with other popular language nowdays.&#xA;&#xA;The only scenario I used expect many years ago was to automatic ssh login. And after I know how to use SSH key to login, I abandoned it.&#xA;&#xA;I came up to expect recently because of the same scenario, SSH login. I need to ssh jump several times to reach the target server, and due to the system restriction, the ssh key can&#39;t be saved permently and will lost after reboot.&#xA;&#xA;The scenario is a little complicated compared with my old case. In order to know how to better write the expect script, I read the Exploring Expect: A Tcl-based Toolkit for Automating Interactive Programs book. The book is nice-written and worth reading if you want to learn little about Tcl or expect.&#xA;&#xA;Here some tips I learn from the book.&#xA;&#xA;a id=&#34;orgc6947f8&#34;/a&#xA;&#xA;The first import thing is how to debug&#xA;&#xA;The simple ways is to use -d option. 
Here are several ways:&#xA;&#xA;1: add -d with expect command&#xA;$ expect -d sample.exp&#xA;&#xA;2: add -d at the first line of expect script&#xA;!/usr/bin/env expect -d&#xA;&#xA;3: add expinternal 1 in the script&#xA;spawn telnet abc.net&#xA;expinternal 1&#xA;&#xA;expect &#34;Login: &#34;&#xA;send &#34;don\r&#34;&#xA;expect &#34;Password: &#34;&#xA;send &#34;swordfish\r&#34;&#xA;&#xA;4: you can also use -D 1 option for expect to trigger gdb liked debugger&#xA;$ expect -D1 sample.exp&#xA;1: expect &#34;hi\n&#34;&#xA;&#xA;dbg1.0  # 5: use strace level to print statments before excuted&#xA;like the set -x in shell&#xA;expect -c &#34;strace 4&#34; sample.exp&#xA;&#xA;a id=&#34;orge0c1855&#34;/a&#xA;&#xA;Expect with pattern and actions&#xA;&#xA;You can use expect with may patterns and actions, just like switch in C:&#xA;&#xA;expect {&#xA;    &#34;hi&#34; { send &#34;You said hi\n&#34;}&#xA;    &#34;hello&#34; { send &#34;Hello yourself\n&#34;}&#xA;    &#34;bye&#34; { send &#34;That was unexpected\n&#34;}&#xA;&#xA;    # a special pattern default(without quotation) is for timeout and EOF&#xA;    default {send &#34;timeout or eof\n&#34;}&#xA;}&#xA;&#xA;Expect command support both globing and regex for pattern matching. The options are -gl- and -re, the default is globing(-gl).&#xA;&#xA;The matched string is saved in expectout(0,string), and any matching and previously unmatched output is saved in variable expectout(buffer).&#xA;&#xA;The command expcontinue allows expect itself to continue executing rather than returning as it normally would.&#xA;&#xA;a id=&#34;org559106b&#34;/a&#xA;&#xA;Spawn new process&#xA;&#xA;You can spawn command to create new process like this:&#xA;&#xA;spawn ftp abc.net&#xA;expect &#34;Name&#34;&#xA;send &#34;anonymous\r&#34;&#xA;expect &#34;Password:&#34;&#xA;send &#34;don@libes.com\r&#34;&#xA;&#xA;But if the command is dynamic, a variable for example, you need to use with eval command. 
Eval in expect is like eval in shell, it will expand the variable and execute the command.&#xA;&#xA;!/usr/local/bin/expect --&#xA;set timeout [lindex $argv 0]&#xA;&#xA;spawn the command from argv with eval&#xA;eval spawn [lrange $argv 1 end]&#xA;expect&#xA;&#xA;a id=&#34;org26fcc8a&#34;/a&#xA;&#xA;Interact with spawn process and continue&#xA;&#xA;The simple usage for the interact it to return the control to the user.&#xA;&#xA;Actually interact provides the functions like expect with patterns and actions.&#xA;&#xA;Simple example is:&#xA;&#xA;spawn ftp abc.net&#xA;...&#xA;&#xA;interact {&#xA;    &#34;~d&#34;        {puts [exec date]}&#xA;    &#34;~e&#34;        exit&#xA;    &#34;foo&#34;       {puts &#34;bar&#34;}&#xA;}&#xA;&#xA;When use input &#34;~d&#34;, date command will be executed, and the result is echoed, and so on.&#xA;&#xA;It also provides the function to break or continue execution like this:&#xA;&#xA;while {1} {&#xA;    interact &#34;+&#34; break &#34;-&#34; continue&#xA;}&#xA;&#xA;In the above loop, if a user presses &#34;+“, the interact returns and the loop breaks. If the &#34;-&#34; is pressed, the interact returns, and the while loop continues.&#xA;&#xA;a id=&#34;org84635ad&#34;/a&#xA;&#xA;Signal handling&#xA;&#xA;The trap is used to handle singal, simple example is like this:&#xA;&#xA;trap intproc SIGINT&#xA;trap {&#xA;    senduser &#34;bye bye&#34;&#xA;    exit&#xA;} SIGINT&#xA;&#xA;And a special singal SIGWCH is for window size change, the handler is like this:&#xA;&#xA;trap {&#xA;    set rows [stty rows]&#xA;    set cols [stty columns]&#xA;    stty rows $rows columns $cols &lt; $spawn_out(slave,name)&#xA;} WINCH&#xA;&#xA;\#tcl #expect #ssh]]&gt;</description>
      <content:encoded><![CDATA[<ul><li><a href="#orgc6947f8" rel="nofollow">The first important thing is how to debug</a></li>
<li><a href="#orge0c1855" rel="nofollow">Expect with pattern and actions</a></li>
<li><a href="#org559106b" rel="nofollow">Spawn new process</a></li>
<li><a href="#org26fcc8a" rel="nofollow">Interact with spawn process and continue</a></li>
<li><a href="#org84635ad" rel="nofollow">Signal handling</a></li></ul>

<p><a href="https://man7.org/linux/man-pages/man1/expect.1.html" rel="nofollow">Expect</a> is a very useful tool for automating interactive applications, though it is also quite old. It is based on the <a href="https://en.wikipedia.org/wiki/Tcl" rel="nofollow">Tcl</a> language, which was created in 1988 and is not seen much nowadays outside the testing domain.</p>

<p>Expect scripts are written in Tcl, and the syntax looks a little weird compared with today's popular languages.</p>

<p>The only scenario in which I used expect, many years ago, was automating SSH login. Once I learned how to log in with SSH keys, I abandoned it.</p>

<p>I came back to expect recently because of the same scenario: SSH login. I need to jump through several hosts to reach the target server, and due to a system restriction, the SSH key cannot be saved permanently and is lost after a reboot.</p>

<p>This scenario is a little more complicated than my old one. To learn how to write the expect script properly, I read <a href="https://www.amazon.com/Exploring-Expect-Tcl-based-Automating-Interactive-ebook-dp-B0043D2EI6/dp/B0043D2EI6/ref=mt_other?_encoding=UTF8&amp;me=&amp;qid=" rel="nofollow">Exploring Expect: A Tcl-based Toolkit for Automating Interactive Programs</a>. The book is nicely written and worth reading if you want to learn a little about Tcl or expect.</p>

<p>Here are some tips I learned from the book.</p>

<p><a id="orgc6947f8" id="orgc6947f8"></a></p>

<h1 id="the-first-import-thing-is-how-to-debug" id="the-first-import-thing-is-how-to-debug">The first important thing is how to debug</h1>

<p>The simplest way is to use the <code>-d</code> option. Here are several approaches:</p>

<pre><code class="language-tcl"># 1: add -d with expect command
$ expect -d sample.exp

# 2: add -d at the first line of expect script
#!/usr/bin/env expect -d

# 3: add exp_internal 1 in the script
spawn telnet abc.net
exp_internal 1

expect &#34;Login: &#34;
send &#34;don\r&#34;
expect &#34;Password: &#34;
send &#34;swordfish\r&#34;

# 4: you can also use the -D 1 option to trigger a gdb-like debugger
$ expect -D1 sample.exp
1: expect &#34;hi\n&#34;

dbg1.0&gt;

# 5: use strace &lt;level&gt; to print statements before they are executed,
# like set -x in shell
expect -c &#34;strace 4&#34; sample.exp
</code></pre>

<p><a id="orge0c1855" id="orge0c1855"></a></p>

<h1 id="expect-with-pattern-and-actions" id="expect-with-pattern-and-actions">Expect with pattern and actions</h1>

<p>You can use expect with many patterns and actions, just like <code>switch</code> in C:</p>

<pre><code class="language-tcl">expect {
    &#34;hi&#34; { send &#34;You said hi\n&#34;}
    &#34;hello&#34; { send &#34;Hello yourself\n&#34;}
    &#34;bye&#34; { send &#34;That was unexpected\n&#34;}

    # a special pattern default(without quotation) is for timeout and EOF
    default {send &#34;timeout or eof\n&#34;}
}
</code></pre>

<p>The expect command supports both glob and regex pattern matching. The options are <code>-gl</code> and <code>-re</code>; the default is glob matching (<code>-gl</code>).</p>

<p>The matched string is saved in <code>expect_out(0,string)</code>, and the matched text together with any earlier unmatched output is saved in <code>expect_out(buffer)</code>.</p>

<p>The command <code>exp_continue</code> allows expect itself to continue executing rather than returning as it normally would.</p>

<p><a id="org559106b" id="org559106b"></a></p>

<h1 id="spawn-new-process" id="spawn-new-process">Spawn new process</h1>

<p>You can use the <code>spawn</code> command to create a new process like this:</p>

<pre><code class="language-tcl">spawn ftp abc.net
expect &#34;Name&#34;
send &#34;anonymous\r&#34;
expect &#34;Password:&#34;
send &#34;don@libes.com\r&#34;
</code></pre>

<p>But if the command is dynamic, held in a variable for example, you need to combine it with the <code>eval</code> command. <code>eval</code> in expect is like <code>eval</code> in shell: it expands the variables and then executes the resulting command.</p>

<pre><code class="language-tcl">#!/usr/local/bin/expect --
set timeout [lindex $argv 0]

# spawn the command from argv with eval
eval spawn [lrange $argv 1 end]
expect
</code></pre>

<p><a id="org26fcc8a" id="org26fcc8a"></a></p>

<h1 id="interact-with-spawn-process-and-continue" id="interact-with-spawn-process-and-continue">Interact with spawn process and continue</h1>

<p>The simplest usage of <code>interact</code> is to return control to the user.</p>

<p>Actually, <code>interact</code> supports patterns and actions just like <code>expect</code>.</p>

<p>A simple example:</p>

<pre><code class="language-tcl">spawn ftp abc.net
# ...

interact {
    &#34;~d&#34;        {puts [exec date]}
    &#34;~e&#34;        exit
    &#34;foo&#34;       {puts &#34;bar&#34;}
}
</code></pre>

<p>When the user types “~d”, the date command is executed and its result is echoed, and so on.</p>

<p>It can also break out of or continue a surrounding loop, like this:</p>

<pre><code class="language-tcl">while {1} {
    interact &#34;+&#34; break &#34;-&#34; continue
}
</code></pre>

<p>In the above loop, if the user presses “+”, interact returns and the loop breaks. If “-” is pressed, interact returns and the while loop continues.</p>

<p><a id="org84635ad" id="org84635ad"></a></p>

<h1 id="signal-handling" id="signal-handling">Signal handling</h1>

<p>The <code>trap</code> command is used to handle signals; a simple example looks like this:</p>

<pre><code class="language-tcl">trap intproc SIGINT
trap {
    send_user &#34;bye bye&#34;
    exit
} SIGINT
</code></pre>

<p>A special signal, <code>SIGWINCH</code>, is sent on window size change; its handler looks like this:</p>

<pre><code class="language-tcl">trap {
    set rows [stty rows]
    set cols [stty columns]
    stty rows $rows columns $cols &lt; $spawn_out(slave,name)
} WINCH
</code></pre>

<p>#tcl <a href="https://rex.writeas.com/tag:expect" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">expect</span></a> <a href="https://rex.writeas.com/tag:ssh" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ssh</span></a></p>
]]></content:encoded>
      <guid>https://rex.writeas.com/quick-introduction-about-expect</guid>
      <pubDate>Sun, 17 Jan 2021 04:01:51 +0000</pubDate>
    </item>
  </channel>
</rss>