binderRpcTest: fix OnewayCallQueueing flake
This was rarely a problem because we were adding an extra 50ms of
slack, so a device had to be really slow in order to hit it. However,
if the test client got descheduled while issuing the oneway calls,
then, since the time spent issuing those calls was excluded from our
measurement of the server time, we could report the server as
returning too early.
Now we include the dispatch of the oneway calls in this measurement,
and the extra 50ms of slack is dropped (the loop issues one fewer
sleep to compensate).
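
To make the numbers concrete (assuming kNumSleeps = 10 for
illustration; the real value is defined in the test): the old code
queued kNumSleeps + 1 = 11 sleeps of 50ms each, so the server was busy
for 550ms, while the check only required more than 10 * 50 = 500ms to
elapse after the calls were issued. If issuing the calls took more
than the remaining 50ms, the check flaked. Starting the clock before
dispatch and issuing one fewer sleep makes the measured window always
cover the full kNumSleeps * kSleepMs of serial server work, no matter
how slowly the client runs.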
Note: this also means that this particular test can no longer tell us
whether oneway calls accidentally block the client (i.e. get processed
synchronously with their issuance). However, there are other tests
which check this (such as OnewayCallDoesNotWait, which can easily be
modified to support arbitrarily slow devices, even though it is
technically also race-prone).
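
For reference, OnewayCallDoesNotWait boils down to a check of this
shape (a reconstruction for illustration only; the actual setup and
constants in binderRpcTest.cpp may differ, and kTooLongMs is a
hypothetical name):

    size_t epochMsBefore = epochMillis();
    // oneway, so this should return well before the server finishes sleeping
    EXPECT_OK(proc.rootIface->sleepMsAsync(kSleepMs));
    size_t epochMsAfter = epochMillis();
    EXPECT_LT(epochMsAfter, epochMsBefore + kTooLongMs);

It races in the opposite direction: a client descheduled for more than
kTooLongMs between the two clock reads fails spuriously, but widening
the gap between kTooLongMs and kSleepMs supports arbitrarily slow
devices.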
Bug: 200173589
Test: looping: binderRpcTest --gtest_filter="*OnewayCallQueu*"
Change-Id: Ie8e270c480790ceb53809279e8d2265a88fa4cb5
diff --git a/libs/binder/tests/binderRpcTest.cpp b/libs/binder/tests/binderRpcTest.cpp
index cc1d2fa..84e8ac6 100644
--- a/libs/binder/tests/binderRpcTest.cpp
+++ b/libs/binder/tests/binderRpcTest.cpp
@@ -1082,15 +1082,18 @@
EXPECT_OK(proc.rootIface->lock());
- for (size_t i = 0; i < kNumSleeps; i++) {
- // these should be processed serially
+ size_t epochMsBefore = epochMillis();
+
+ // all these *Async commands should be queued on the server sequentially,
+ // even though there are multiple threads.
+ for (size_t i = 0; i + 1 < kNumSleeps; i++) {
proc.rootIface->sleepMsAsync(kSleepMs);
}
- // should also be processesed serially
EXPECT_OK(proc.rootIface->unlockInMsAsync(kSleepMs));
- size_t epochMsBefore = epochMillis();
+ // this can only return once the final async call has unlocked
EXPECT_OK(proc.rootIface->lockUnlock());
+
size_t epochMsAfter = epochMillis();
EXPECT_GT(epochMsAfter, epochMsBefore + kSleepMs * kNumSleeps);
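
For context, the test body after this change reads roughly as follows
(the setup lines outside the hunk are reconstructed and may not match
the file exactly):

    // constants and thread count are defined earlier in the test
    auto proc = createRpcTestSocketServerProcess(/*threads*/);
    EXPECT_OK(proc.rootIface->lock());

    size_t epochMsBefore = epochMillis();

    // all these *Async commands should be queued on the server sequentially,
    // even though there are multiple threads.
    for (size_t i = 0; i + 1 < kNumSleeps; i++) {
        proc.rootIface->sleepMsAsync(kSleepMs);
    }
    EXPECT_OK(proc.rootIface->unlockInMsAsync(kSleepMs));

    // this can only return once the final async call has unlocked
    EXPECT_OK(proc.rootIface->lockUnlock());

    size_t epochMsAfter = epochMillis();
    EXPECT_GT(epochMsAfter, epochMsBefore + kSleepMs * kNumSleeps);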