@Jest: sadly, that's called source-code modification, and Jurassic doesn't let you do that. Plus it won't help in the case where you write over a const; consts should be produced as a read-only kind of variable, and that means changing the IL that gets emitted (really just a var augmented with a read-only privilege).
It seems System.Reflection.Emit.DynamicMethod is in Mono 3.2, so I'm going to compile a Mono 3.2 build for Windows (they don't have the installers up despite boasting that the downloads are available) and see if it works well enough for Jurassic to run...
Edit:
YES!!!!!!!!!!!!!
I modified Jurassic to store pre-compiled code in memory. That means update scripts and render scripts don't have to 'wind up' each time they are used, only when they are set. The map engine runs at 8500 FPS now.
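The idea is just to compile each script once when it's assigned and reuse the compiled result on every call, instead of re-parsing the source every frame. Here's a minimal sketch of that caching pattern; the ICompiledScript/GetOrCompile names are hypothetical stand-ins for my Jurassic changes, not real Jurassic API:
using System;
using System.Collections.Generic;

// Hypothetical stand-in for a script compiled by the (modified) Jurassic engine.
public interface ICompiledScript
{
    void Execute();
}

public static class ScriptCache
{
    // Compiled scripts keyed by their source text; filled when a script is set,
    // never during the update/render loop.
    private static readonly Dictionary<string, ICompiledScript> _cache =
        new Dictionary<string, ICompiledScript>();

    // 'compile' is whatever actually turns source into IL (the expensive part).
    public static ICompiledScript GetOrCompile(string source, Func<string, ICompiledScript> compile)
    {
        ICompiledScript script;
        if (!_cache.TryGetValue(source, out script))
        {
            script = compile(source);   // pay the wind-up cost once
            _cache[source] = script;
        }
        return script;                  // every later call is just a lookup
    }
}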
Also, I modified Jurassic to accept consts. It still treats them like vars, though, since I'd have to do a lot of legwork to add read/write permissions to the standard var structure. But at least no more regexes: JS is not a regular language (almost no programming language is), so this is for the best.
Edit:
The FPS dropped to 6200, but now the engine hits that number universally: any map size, with tiles that can change at any moment. Before, it was just static images of small maps, which is why the FPS was so high; if I started animating map tiles every frame, the FPS would plummet fast. Now it holds steady at 6200 no matter when the tiles change or how large the map is.
The tiles immediately on the screen get refreshed every frame, but I don't do what Sphere does and draw each tile to the screen separately; I tried that and got 800 static FPS. I'm an evil genius. Instead I construct a large vertex map: basically an array of 4 vertex points per tile, each storing a source position and a destination position for one corner of that tile. It's constructed only once (a sketch of that one-time construction appears below, after the cutout discussion). This gives the whole map as floating-point data. I send that to the graphics card and draw it to screen. Now I can draw a map of any size... and watch the FPS plummet to 300. Okay, so I have this large chunk of floating-point data that the GPU *can* efficiently parse; how do I reduce the load for maps that are HUGE? And a few games have maps that large. I create a cutout of the float data. The algorithm looks like a standard image-processing function that uses scan-lines and offsets to produce a cutout:
// Copies the visible portion of the full vertex map into _cutout.
// x, y are the top-left of the view in pixels; scan is the stride of one map row, in tiles.
private static void CutoutVerts(Vertex[] inverts, int x, int y, int scan)
{
    scan <<= 2;                     // 4 vertices per tile, so a map row is scan*4 vertices wide
    y /= _map.Tileset.TileHeight;   // convert pixel position to tile coordinates
    x /= _map.Tileset.TileWidth;
    int h = GlobalProps.Height / _map.Tileset.TileHeight + 1;           // visible tile rows (+1 for partial tiles)
    int length = (GlobalProps.Width / _map.Tileset.TileWidth + 1) << 2; // vertices per visible row
    int offset = (x << 2) + y * scan;   // index of the first visible tile's first vertex
    int height = offset + h * scan;     // index just past the last visible row
    int index = 0;
    for (var i = offset; i < height; i += scan)
    {
        // each iteration copies one visible scan-line of vertices
        Array.Copy(inverts, i, _cutout, index, length);
        index += length;
    }
}
It's super fast. I had made an algorithm that filled an array and asked on each iteration: are this tile's vertices on screen? That approach was terribly slow; it only got reasonably fast when most of the data was on screen, and it got slower the farther away you scrolled. At best that approach hit 3200 FPS. This is the best and fastest method I've found so far since it's mostly direct access: find the start and copy the scans. Since Array.Copy copies contiguous data, this has effectively become an O(n) algorithm rather than an O(n^2) one.
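For reference, here's roughly what the one-time construction of the full vertex map could look like. The Vertex layout and names are assumptions for illustration, not the exact engine code; the point is just 4 corners per tile, each carrying a tileset source position and a map destination position:
// Assumed vertex layout: a source position (into the tileset texture) and a
// destination position (on the map), both as floats for the GPU.
struct Vertex
{
    public float SrcX, SrcY;   // where this corner samples the tileset
    public float DstX, DstY;   // where this corner lands on the map
}

// Build 4 vertices per tile, row-major, one time only.
// tiles[y, x] is the tile index; tilesPerRow is how many tiles fit across the tileset image.
static Vertex[] BuildMapVerts(int[,] tiles, int tileW, int tileH, int tilesPerRow)
{
    int mapH = tiles.GetLength(0), mapW = tiles.GetLength(1);
    var verts = new Vertex[mapW * mapH * 4];
    int i = 0;
    for (int ty = 0; ty < mapH; ty++)
    {
        for (int tx = 0; tx < mapW; tx++)
        {
            int t = tiles[ty, tx];
            float sx = (t % tilesPerRow) * tileW;   // tile's position in the tileset
            float sy = (t / tilesPerRow) * tileH;
            float dx = tx * tileW, dy = ty * tileH; // tile's position on the map
            verts[i++] = new Vertex { SrcX = sx,         SrcY = sy,         DstX = dx,         DstY = dy };
            verts[i++] = new Vertex { SrcX = sx + tileW, SrcY = sy,         DstX = dx + tileW, DstY = dy };
            verts[i++] = new Vertex { SrcX = sx + tileW, SrcY = sy + tileH, DstX = dx + tileW, DstY = dy + tileH };
            verts[i++] = new Vertex { SrcX = sx,         SrcY = sy + tileH, DstX = dx,         DstY = dy + tileH };
        }
    }
    return verts;
}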
Okay, so you've got a really fast map-blitting algorithm; how does it handle the case where every tile on the map must animate? This is the killing blow to every map engine in existence. But not this one. The floating-point data is sent to the graphics card each frame. That means a tile only has to modify its 4 source points (pointing them at the next frame of its animation) prior to the next draw cycle in order for the change to show up when the screen is flipped. So that issue has indeed been covered.
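As a rough sketch of that idea, using the same assumed Vertex layout as above (again, the names are illustrative), animating a tile is just four writes into the vertex array before the next draw:
// Point one tile's quad at a new source rect in the tileset.
// mapVerts is the full vertex map, tx/ty the tile's map coordinates, mapWidth the map
// width in tiles, and newSx/newSy the next animation frame's position in the tileset.
static void SetTileSource(Vertex[] mapVerts, int tx, int ty, int mapWidth,
                          float newSx, float newSy, int tileW, int tileH)
{
    int i = (ty * mapWidth + tx) * 4;                       // first of the tile's 4 vertices
    mapVerts[i + 0].SrcX = newSx;         mapVerts[i + 0].SrcY = newSy;
    mapVerts[i + 1].SrcX = newSx + tileW; mapVerts[i + 1].SrcY = newSy;
    mapVerts[i + 2].SrcX = newSx + tileW; mapVerts[i + 2].SrcY = newSy + tileH;
    mapVerts[i + 3].SrcX = newSx;         mapVerts[i + 3].SrcY = newSy + tileH;
    // destination points never change; the data is re-sent to the GPU next frame anyway
}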
Okay, so how does it handle layers? Not well, I'm afraid. All map engines take a hit with layers; it's impossible not to take a hit. Why? Because a new layer is basically a new map-engine construct, just with different tiles. But look at it this way: to draw one layer is O(n). To draw many layers is O(n*l), and that's still way better than Sphere, which is O(x*y*l). Can it go faster? Yes! Technically I can append the layers onto the floating-point data map and have the GPU zip through all of them in O(n + l1 + l2 + ... + ln) time, but that will *technically* be no faster, except that it doesn't have to re-init for the next draw call.
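A sketch of what appending layers could look like, under the same assumptions as above: concatenate each layer's vertex data into one buffer, bottom to top, and let a single draw call walk the whole thing.
// Concatenate per-layer vertex maps into one buffer so a single draw call
// covers every layer (layers are drawn in array order, bottom to top).
static Vertex[] AppendLayers(Vertex[][] layerVerts)
{
    int total = 0;
    foreach (var layer in layerVerts)
        total += layer.Length;

    var combined = new Vertex[total];
    int offset = 0;
    foreach (var layer in layerVerts)
    {
        Array.Copy(layer, 0, combined, offset, layer.Length);
        offset += layer.Length;
    }
    return combined;
}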