Saturday, August 30, 2014

Can a data race result in something worse than just reading a garbage value?


Both the C11 and C++11 standards define concurrent non-atomic reads and writes of the same memory location as a data race, which is undefined behavior, so such a program may do virtually anything. OK, got it. I want to understand the reasoning behind this requirement, which strikes me (at least today) as overly strict.
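
For concreteness, here is a minimal sketch of what the standard calls a data race (just a toy illustration, not part of my mechanism): one thread stores to a plain int while another reads it with no synchronization, so by the rule above its behavior is undefined.


#include <cstdio>
#include <thread>

int shared = 0;   // plain, non-atomic object

int main() {
    std::thread writer([] { shared = 42; });   // unsynchronized write
    std::printf("%d\n", shared);               // unsynchronized read: a data race, hence UB
    writer.join();
}
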


I'm playing with yet another IPC mechanism that deliberately exploits such racy memory access. I won't bother you with all the gory details; let me show a simplified version instead:



#include <atomic>
#include <type_traits>

template <typename T>
class sval {

    static_assert(std::is_trivially_copy_assignable<T>::value, "");
    static_assert(std::is_trivially_destructible<T>::value, "");

    static constexpr unsigned align = 64;   // keep the counters and buffers on separate cache lines

    unsigned last;                                   // writer-private counter
    alignas(align) std::atomic<unsigned> serial;     // published counter; serial & 1 selects the sealed buffer

    struct alignas(align) {
        T value;
    } buf[2];

public:

    sval(): last(0), serial(last) {}

    sval(sval &) = delete;
    void operator =(sval &) = delete;

    // Single writer: fill the buffer that is not currently sealed, then publish it by bumping serial.
    void write(const T &a) {
        ++last;
        buf[last & 1].value = a;
        serial.store(last, std::memory_order_release);
    }

    class reader {

        const sval &sv;
        unsigned last;   // serial of the last value this reader has consumed

    public:

        reader(const sval &sv): sv(sv), last(sv.serial.load(std::memory_order_relaxed)) {}

        // Returns false if nothing new has been published since the previous successful read.
        bool read(T &a) {
            unsigned serial = sv.serial.load(std::memory_order_acquire);

            if (serial == last) {
                return false;
            }

            for (;;) {
                a = sv.buf[serial & 1].value;   // racy: the writer may be overwriting this very buffer
                unsigned check = sv.serial.load(std::memory_order_seq_cst);

                if (check == serial) {          // serial unchanged, so the copy above is taken as consistent
                    last = check;
                    return true;
                }

                serial = check;                 // the writer moved on; retry with the new serial
            }
        }

    };

};


It's a shared value with a single writer and multiple readers. Underneath there are two buffers: buf[serial & 1] is the 'sealed' one, while the other may currently be under update. The writer's logic is very simple and (that's the main feature) is unaffected by the readers' presence and activity. A reader, however, has to do more work to guarantee the consistency of the fetched data (and this is my main question here):



  1. read the serial number

  2. read buf[serial & 1]

  3. read the serial again and retry if it has changed


So a reader may fetch garbage data in the middle, but it performs the check afterwards. Is it still possible for something bad to leak out of the read() internals? If so, what are the exact reasons, whether on the hardware side or elsewhere?
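
To make the "garbage in the middle" case more concrete, here is a small single-threaded sketch (just an illustration; the Pair type is hypothetical and not part of my mechanism). The writer always keeps x == y, so a torn copy would show up as x != y; read() is supposed to return true only once the serial re-check has ruled such a copy out.


#include <cassert>

// Hypothetical payload where a torn copy would be observable:
// the writer maintains the invariant x == y.
struct Pair {
    int x;
    int y;
};

void pair_demo() {
    sval<Pair> sp;                // Pair passes the static_asserts (trivially copyable/destructible)
    sval<Pair>::reader r(sp);     // snapshot the initial serial before anything is written

    sp.write(Pair{1, 1});         // publish a consistent value

    Pair p;
    if (r.read(p)) {
        // read() returned true only after the serial re-check, so p is expected to be
        // a consistent snapshot; whether that is actually guaranteed under concurrent
        // writes is exactly what I'm asking about.
        assert(p.x == p.y);
    }
}
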

Below is an example application. I tested both this app and my original, more complex idea on x86 and ARM. But neither the variety of the tests nor the platform coverage makes me very confident about my reasoning.



#include <algorithm>
#include <cassert>
#include <cstdio>
#include <iterator>
#include <thread>

int main() {
    const int N = 10000000;
    sval<int> sv;
    std::thread threads[2];

    // Two readers: each must only ever observe strictly increasing values.
    for (auto &t : threads) {
        t = std::thread([&sv] {
            sval<int>::reader r(sv);
            int n = 0;

            for (int i = 0, a; i < N; i = a) {
                while (!r.read(a)) {
                }

                assert(a > i);   // a torn or stale read would likely be caught here

                ++n;
            }

            std::printf("%d\n", n);   // how many distinct values this reader caught
        });
    }

    // Writer: the rotate is just busywork to vary the timing between writes.
    int dummy[24] = {};

    for (int i = 1; i <= N; ++i) {
        std::rotate(std::begin(dummy), dummy + 11, std::end(dummy));
        sv.write(i);
    }

    for (auto &t : threads) {
        t.join();
    }
}
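

For reference, I build and run the test like this (assuming the class template and main() above are pasted into a single file, say sval_test.cpp):


g++ -std=c++11 -O2 -pthread -o sval_test sval_test.cpp
./sval_test
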


