A critical test is to run the same controller on real robots, both indoors and outdoors. This future test has the potential either to lend weight to or to disprove our original theory that the simple radio model we selected is sufficient for robotics simulations. If the real robots fail indoors but succeed outdoors, we have proof that our radio model was not sufficiently predictive to be useful in controller design. If, on the other hand, the real-world tests are as uniform as in our simulator, it suggests that the radio model might be adequate. Of course, in this latter case, we have not proven anything: it could be that this particular controller is so reliable that it works everywhere, and the real-world performance of some other controller might not be predicted as well by our simple radio model.
If the real-world test does succeed as uniformly as this simulation suggests, we have probably learned more about this particular controller than about the simulator or radio model. The controller attempts to ``see through'' transient errors by averaging results over many readings; the (simulated) packet rate is fast enough that a series of transmission attempts is measured before each control decision. Success in the real world would suggest that sensors with data rates high enough to be polled many times per control cycle can support more reliable controllers than slower sensors can.
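The averaging strategy described above can be sketched as follows. This is a minimal illustrative model, not the controller's actual implementation: the function names, the Bernoulli model of per-packet reception, and the poll count are assumptions introduced here for clarity.

```python
import random


def poll_radio(true_prr: float) -> bool:
    """Hypothetical single packet-reception poll: each attempt succeeds
    with probability equal to the (unknown) true packet reception rate.
    Individual readings are binary and noisy."""
    return random.random() < true_prr


def estimate_link_quality(true_prr: float, polls_per_cycle: int) -> float:
    """Average many fast polls within one control cycle.  Transient
    per-packet errors are smoothed out, so the estimate concentrates
    near the true reception rate as polls_per_cycle grows."""
    successes = sum(poll_radio(true_prr) for _ in range(polls_per_cycle))
    return successes / polls_per_cycle


if __name__ == "__main__":
    random.seed(0)  # deterministic demo
    # A single reading is all-or-nothing; 64 readings per control
    # cycle give a usable estimate of link quality instead.
    estimate = estimate_link_quality(true_prr=0.7, polls_per_cycle=64)
    print(round(estimate, 2))
```

A controller built on a slow sensor would have to act on one binary reading per cycle, whereas this averaged estimate varies smoothly and can be thresholded or filtered further, which is the intuition behind the claim above.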