I think most people here are missing the point of what this would do. GPU.js wouldn't make games on systems with good GPUs run as fast as the benchmarks might suggest at first glance. GPUs are good at doing the same thing thousands of times at the same time, which is why they're very good at drawing graphics: each pixel is run on its own thread on the GPU, and all the threads run at the same time with the same instructions but differing data (colors, textures, or other object-specific data).
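To make the "same instructions, different data" idea concrete, here's a plain JavaScript sketch (not GPU.js itself, and the brightness function is just an invented example) of the kind of per-pixel work a GPU parallelizes: every pixel runs through the identical function, and only the input data differs.

```javascript
// Illustrative: the same brightness function applied to every pixel.
// On the CPU this runs one pixel at a time; a GPU runs one thread per
// pixel, all executing these identical instructions on different data.
function brighten(pixel, amount) {
  return {
    r: Math.min(255, pixel.r + amount),
    g: Math.min(255, pixel.g + amount),
    b: Math.min(255, pixel.b + amount),
  };
}

const pixels = [
  { r: 10, g: 20, b: 30 },
  { r: 200, g: 250, b: 100 },
];

// Same instructions for every element, different data per element.
const result = pixels.map(p => brighten(p, 50));
console.log(result);
```

The GPU's advantage is that it runs thousands of these identical calls at once instead of looping through them one by one.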
There's just no way for your game to run entirely on the GPU, which is what some people seem to think would happen. What the GPU could realistically help with in MV games is repeated math operations on a lot of data, like moving, scaling, and rotating thousands of objects. All three of those things can happen in an MV game, but I don't think the tradeoff will make it worth it. To have the GPU process data, you're going to pay overhead: the information has to be sent to the GPU across your motherboard, the GPU needs to be free so it can begin working on what it was sent, and then the result has to be sent back to the CPU and stored in RAM.
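A toy cost model makes the overhead argument visible. Every constant below is an invented illustrative number, not a benchmark; the point is only the shape of the tradeoff, not the exact figures.

```javascript
// Hypothetical cost model for offloading work to the GPU.
// All per-element costs and the launch overhead are assumed values
// chosen purely for illustration.
function cpuTimeMs(elements) {
  const cpuComputePerElement = 0.001; // ms per element, assumed
  return elements * cpuComputePerElement;
}

function gpuTimeMs(elements) {
  const transferPerElement = 0.0001;    // ms each way, assumed
  const fixedLaunchOverhead = 1.0;      // ms to dispatch work, assumed
  const gpuComputePerElement = 0.00001; // ms per element, assumed
  return fixedLaunchOverhead
    + 2 * elements * transferPerElement // send to the GPU and back
    + elements * gpuComputePerElement;
}

// Tiny workload (a 3x3 matrix): the fixed overhead dominates,
// so the CPU is faster.
console.log(cpuTimeMs(9), gpuTimeMs(9));

// Large workload (a 512x512 matrix): the parallel compute savings
// outweigh the transfer cost, so the GPU is faster.
console.log(cpuTimeMs(262144), gpuTimeMs(262144));
```

Under these assumptions the crossover only happens once the workload is large enough to amortize the transfer and dispatch overhead, which is exactly why a 512x512 benchmark doesn't tell you much about a 3x3 workload.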
The benchmarks for GPU.js use a 512x512 matrix. 2D applications like MV use a 3x3 matrix, or 9 elements, whereas a 512x512 matrix is 262,144 elements, roughly 29,000 times the size of our 3x3 matrix. To get the same performance benefit, we'd need to be doing about 29 thousand matrix multiplications at once. If an MV game has anywhere near 29,000 objects moving, rotating, and/or scaling at once, then you need to sit down and consider whether what you're doing is really the only way it can be done, because that is simply ridiculous.
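The ratio itself is easy to check:

```javascript
// Element counts for the benchmark matrix versus MV's 2D transform matrix.
const benchmarkElements = 512 * 512; // 262,144 elements
const mvElements = 3 * 3;            // 9 elements

// How many 3x3 workloads it takes to match one 512x512 workload,
// comparing by element count.
const ratio = benchmarkElements / mvElements;
console.log(Math.round(ratio)); // → 29127
```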
If your computer has a compatible GPU and your browser is up to date (or you use the desktop client, which is up to date enough), then your GPU is already being used to handle drawing, which is otherwise a huge source of lag. Rendering a frame on the CPU alone is expensive, so by offloading that work to the GPU, you're already seeing the big performance improvement. You'd need to be doing a lot of complicated math on top of that for GPU.js to be worth it.
tl;dr No, it's really not going to help much, unless you're trying to compute the 10 billionth digit of pi or something.