Runtime Verification can make your design precise by writing it in a formal language. If needed, we can prove properties of the design, generate an executable implementation, or develop other tools based on the design. Because every engagement is different, we don't have a "fill in the blanks" template. Instead, we'll provide some examples to help you grasp the range of possibilities.
Compilers turn programs written in high-level languages into programs written in raw bytes. Those bytes are then interpreted by what's misleadingly called a "machine." Sometimes the machine is implemented in silicon (called a "CPU" or a "core"), but sometimes it's another program (called "a virtual machine" or "VM").
The raw bytes can be translated back into text, but the result is unfriendly to human readers. Here's part of one such translation:
mstore(0x40, 0x80) callvalue dup1 iszero tag_1 jumpi
In 2017, IOHK funded the University of Illinois and Runtime Verification to (1) develop KEVM, a formal specification for the EVM, and (2) use the specification to automatically generate a correct-by-construction implementation (a virtual machine) that was (3) fast enough to be used in a test network. The project was a success.
The work also included proofs of correctness (or incorrectness) of smart contracts. The same specification that produces KEVM is used to show that a contract does what it's supposed to. That's an example of Runtime Verification's approach: do the hard work of creating a formal specification first, then reap the benefits by getting tools and capabilities for less money than if they were independently developed.
The KEVM work formalized an existing virtual machine design. That experience, plus prior work on the LLVM compiler framework, suggested a new virtual machine with the potential to be faster than the EVM, inherently more resistant to smart-contract exploits, and designed to make proofs of contract correctness easier. IOHK funded that virtual machine, dubbed IELE, and later deployed it to their test network.
We began with an existing executable formal semantics for C, written using the K toolkit. It obeyed the official C11 specification, which takes care to call out undefined behavior. If a program does something "that's defined to be undefined," like dividing by zero, anything is allowed to happen. Because compiler writers can take advantage of undefined behavior to produce faster code in the defined cases, the scope of "anything" can be surprisingly broad. For example, a compiler could (metaphorically) say, "if this variable is zero here, there would have been a divide-by-zero attempt back there, so 'anything' gives me license to discard this entire chunk of code." That can, in the name of efficiency, escalate a simple wrong answer into a much more severe bug, like a security breach.
Our first C semantics produced a C interpreter that handled undefined behavior in the usual way: if a program it was running did something undefined, the interpreter did whatever was convenient for it, not the programmer. However, if anything is allowed, and efficiency isn't the overriding goal (as it is for C compilers), why not do something useful, like print a "you divided by zero on line 3838" error message?
Support from NASA, Boeing, Toyota, and Denso funded us to do just that: extend the semantics so that undefined behavior was explicitly defined to be helpful to the programmer. This turned out to be a big undertaking (C has a lot of undefined behaviors), but it also led to the RV-Match tool.
In addition to the C specification mentioned above, we have written formal specifications for the Solidity, Plutus, and Vyper blockchain languages. Just as a formal specification for a virtual machine's bytecode language yields a virtual machine, a formal specification for a high-level language yields an interpreter or compiler. Once the specification is finished, specialized tools (like the undefined-behavior checker described above) can be built from it.
Blockchains are evolving as their early limitations become more pressing and new applications are discovered. However, each such change requires the many computers that collectively run the blockchain to implement new algorithms. For example, Ethereum's Serenity release requires those computers to use an algorithm, RANDAO, that forces all of them to agree on random numbers. It's as if computers all over the world agreed to get heads or tails by flipping one particular coin.
Like all blockchain algorithms, RANDAO has to resist active attempts at subversion. In RANDAO's case, attackers would try to bias the random numbers. By analogy, that would be like shaving the single worldwide coin so that it comes up heads 50.01 percent of the time. On a small scale, that wouldn't matter, but at the scale of a blockchain, such a small difference can be multiplied into large illicit gains.
Blockchain systems are open in the sense that anyone can buy computers and help run the blockchain. The question the Ethereum Foundation posed to Runtime Verification was: how badly could an affordable set of computers bias the random numbers?
RV has expertise in many approaches, and we determined that term rewriting (using Maude) and statistical model checking (using PVeStA) best suited the problem. Using them, we found that even a large number of computers could produce only a small bias unless the attack was both continuous and lengthy. A lengthy attack both increases the chance of detection and costs the attacker increasing amounts of money. (Computers involved in RANDAO must make deposits to participate, and the attack requires them to forfeit those deposits. In realistic situations, the attacker's gain is unlikely to be worth the investment.)
As a result, RANDAO can be deployed with more confidence than before.