From 63cfdd987b1dfbf97486f0f884380faee0ae25d0 Mon Sep 17 00:00:00 2001
From: Ravishankar N <ravishankar@redhat.com>
Date: Wed, 4 Sep 2019 11:27:30 +0530
Subject: [PATCH 416/449] tests: fix spurious failure of
 bug-1402841.t-mt-dir-scan-race.t

Upstream patch: https://review.gluster.org/23352

Problem:
Since commit 600ba94183333c4af9b4a09616690994fd528478, shd starts
healing as soon as it is toggled from disabled to enabled. This was
causing the following line in the .t to fail on a 'fast' machine (always
on my laptop and sometimes on the jenkins slaves):

EXPECT_NOT "^0$" get_pending_heal_count $V0

because by the time shd was disabled, the heal was already completed.
Fix:
Increase the number of files to be healed and make the count a variable
(FILE_COUNT), so that it can be bumped up further should the machines
become even faster. Also create pending metadata heals to increase the
time taken to heal a file.

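For illustration only (not part of the patch): the workload pattern the test
relies on is one content change plus one mode change per file, so each file
accrues both a pending data heal and a pending metadata heal while the brick
is down. A minimal stand-alone sketch against a plain directory, where DIR
and COUNT are hypothetical stand-ins for the test's $M0 and FILE_COUNT:

```shell
#!/bin/sh
# Sketch of the per-file workload: a write (data change) followed by a
# chmod (metadata change). In the real test these run against a gluster
# mount with one brick killed, creating two heal types per file.
DIR=$(mktemp -d)
COUNT=500
for i in $(seq 1 "$COUNT"); do
    echo hello > "$DIR/file$i"   # content change -> pending data heal
    chmod -x "$DIR/file$i"       # mode change   -> pending metadata heal
done
ls "$DIR" | wc -l                # 500 files created
```

The metadata change matters because a data-only heal of a small file
completes almost instantly on fast hardware; the extra per-file work widens
the window in which get_pending_heal_count is nonzero.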
>fixes: bz#1748744
>Change-Id: I5a26b08e45b8c19bce3c01ce67bdcc28ed48198d
Signed-off-by: Ravishankar N <ravishankar@redhat.com>

BUG: 1844359
Change-Id: Ie3676c6c2c27e7574b958d2eaac23801dfaed3a9
Reviewed-on: https://code.engineering.redhat.com/gerrit/202481
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
 tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t b/tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t
index 6351ba2..a1b9a85 100755
--- a/tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t
+++ b/tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t
@@ -3,6 +3,8 @@
 . $(dirname $0)/../../volume.rc
 cleanup;
 
+FILE_COUNT=500
+
 TEST glusterd
 TEST pidof glusterd
 TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0,1}
@@ -11,15 +13,14 @@ TEST $CLI volume set $V0 cluster.shd-wait-qlength 100
 TEST $CLI volume start $V0
 
 TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0;
-touch $M0/file{1..200}
-
+for i in `seq 1 $FILE_COUNT`;  do touch $M0/file$i; done
 TEST kill_brick $V0 $H0 $B0/${V0}1
-for i in {1..200}; do echo hello>$M0/file$i; done
+for i in `seq 1 $FILE_COUNT`; do echo hello>$M0/file$i; chmod -x $M0/file$i; done
 TEST $CLI volume start $V0 force
 EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status $V0 $H0 $B0/${V0}1
 EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 1
 
-EXPECT "200" get_pending_heal_count $V0
+EXPECT "$FILE_COUNT" get_pending_heal_count $V0
 TEST $CLI volume set $V0 self-heal-daemon on
 
 EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
-- 
1.8.3.1