From b311385a3c4bd56d69d1fa7e9bd3d9a2ae5c344e Mon Sep 17 00:00:00 2001
From: Pranith Kumar K <pkarampu@redhat.com>
Date: Mon, 7 Oct 2019 12:27:01 +0530
Subject: [PATCH 403/449] Fix spurious failure in bug-1744548-heal-timeout.t

The script was assuming that the heal would have been triggered
by the time the test was executed, which may not be the case.
This can lead to the following failures when the race happens:

...
18:29:45 not ok  14 [     85/      1] <  26> '[ 331 == 333 ]' -> ''
...
18:29:45 not ok  16 [  10097/      1] <  33> '[ 668 == 666 ]' -> ''

Heal on the 3rd brick didn't start completely the first time the command
was executed, so the extra count got added to the next profile info: the
two OPENDIRs missing from the first interval (331 instead of 333) were
charged to the following one (668 instead of 666).

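For context, "volume profile $V0 info incremental" only reports the fops
since the previous "info" call, so a crawl that starts late is simply
counted in the next window. A sketch of the racy pattern the patch removes
(variables such as $CLI and $V0 come from the test harness):

    # Window 1: the 3rd brick's crawl has not done all its opendirs yet.
    COUNT=`$CLI volume profile $V0 info incremental |grep OPENDIR|awk '{print $8}'|tr -d '\n'`
    TEST [ "$COUNT" == "333" ]   # races: may observe "331"

    # Window 2: the late opendirs are charged to this interval instead,
    # so the next incremental read can see "668" rather than "666".
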
Fixed it by depending on cumulative stats and waiting until the count is
satisfied using EXPECT_WITHIN.

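For reference, plain "volume profile $V0 info" prints a cumulative and an
interval stats block per brick, so "grep OPENDIR" matches two rows per
brick and "sed 'n;d'" keeps the odd-numbered (cumulative) rows that the
new helper concatenates. EXPECT_WITHIN (from tests/include.rc) then polls
that helper until its output matches the expected pattern; roughly
equivalent to this simplified sketch (expect_within here is illustrative,
not the harness code):

    expect_within() {
        local timeout=$1 pattern=$2
        shift 2
        local deadline=$(( $(date +%s) + timeout ))
        while [ "$(date +%s)" -le "$deadline" ]; do
            # Re-run the probe function until its output matches.
            "$@" | grep -qE "$pattern" && return 0
            sleep 1
        done
        return 1
    }
    # Usage: expect_within $HEAL_TIMEOUT '^333$' get_cumulative_opendir_count
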
> Upstream patch link: https://review.gluster.org/23523
> fixes: bz#1759002
> Change-Id: I3b410671c902d6b1458a757fa245613cb29d967d
> Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

BUG: 1764091
Change-Id: Ic4d16b6c8a1bbc35735567d60fd0383456b9f534
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/202369
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
 tests/bugs/replicate/bug-1744548-heal-timeout.t | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/tests/bugs/replicate/bug-1744548-heal-timeout.t b/tests/bugs/replicate/bug-1744548-heal-timeout.t
index 3cb73bc..0aaa3ea 100644
--- a/tests/bugs/replicate/bug-1744548-heal-timeout.t
+++ b/tests/bugs/replicate/bug-1744548-heal-timeout.t
@@ -4,6 +4,11 @@
 . $(dirname $0)/../../volume.rc
 . $(dirname $0)/../../afr.rc
 
+function get_cumulative_opendir_count {
+#sed 'n;d' prints odd-numbered lines
+    $CLI volume profile $V0 info |grep OPENDIR|sed 'n;d' | awk '{print $8}'|tr -d '\n'
+}
+
 cleanup;
 
 TEST glusterd;
@@ -20,23 +25,23 @@ TEST ! $CLI volume heal $V0
 TEST $CLI volume profile $V0 start
 TEST $CLI volume profile $V0 info clear
 TEST $CLI volume heal $V0 enable
-TEST $CLI volume heal $V0
 # Each brick does 3 opendirs, corresponding to dirty, xattrop and entry-changes
-COUNT=`$CLI volume profile $V0 info incremental |grep OPENDIR|awk '{print $8}'|tr -d '\n'`
-TEST [ "$COUNT" == "333" ]
+EXPECT_WITHIN $HEAL_TIMEOUT "^333$" get_cumulative_opendir_count
 
 # Check that a change in heal-timeout is honoured immediately.
 TEST $CLI volume set $V0 cluster.heal-timeout 5
 sleep 10
-COUNT=`$CLI volume profile $V0 info incremental |grep OPENDIR|awk '{print $8}'|tr -d '\n'`
 # Two crawls must have happened.
-TEST [ "$COUNT" == "666" ]
+EXPECT_WITHIN $HEAL_TIMEOUT "^999$" get_cumulative_opendir_count
 
 # shd must not heal if it is disabled and heal-timeout is changed.
 TEST $CLI volume heal $V0 disable
+#Wait for configuration update and any opendir fops to complete
+sleep 10
 TEST $CLI volume profile $V0 info clear
 TEST $CLI volume set $V0 cluster.heal-timeout 6
-sleep 6
+#Better to wait for more than 6 seconds to account for configuration updates
+sleep 10
 COUNT=`$CLI volume profile $V0 info incremental |grep OPENDIR|awk '{print $8}'|tr -d '\n'`
 TEST [ -z $COUNT ]
 cleanup;
-- 
1.8.3.1