Topic: TurboSphere (Read 159909 times)

  • Rahkiin
Re: TurboSphere
Reply #420
Wow this is awesome!

Makes me reconsider my Mac engine: I could just fork TS and submit pull requests, writing plugins... currently I only use my engine to experiment anyway. I don't see myself writing a graphics engine as good as yours!

If I find the time, I'll run it on my Mac tomorrow.

Feature request: V8 remote debugging (just allowing it; d8 can then be used as the debugger).

I might cease working on the engine and continue on my IDE instead, and a built-in debugger is a must-have!

Keep up the good work!

// Rahkiin

Re: TurboSphere
Reply #421
Thanks!

It would be awesome to have another developer! I've really tried to make the plugin-writing experience as friendly as I can...but I'm the only one who has ever used it! Getting more feedback would be extremely helpful.

I see a lot of ancient and half-finished documentation on the V8 remote debugger system. I'd have to take a close look. Perhaps I should create a 'Debug' plugin, one that adds console writing, remote debugging, etc.

  • Rahkiin
Re: TurboSphere
Reply #422
You actually only need to enable debugging, add a flag to wait for the debugger, and handle debug messages (how depends on how you do synchronization). I did it in 10 lines or less, total.

The actual debugging can be done using d8, or by writing your own program that uses the debug protocol--that still works.

Re: TurboSphere
Reply #423
Debugging is enabled on the Windows and OS X builds of V8 I distribute. So I guess I'd just need to actually push debug messages.

I've added JPEG and TGA saving (again, done better than in SDL_GL_Threaded), and cleaned up PNG saving. libpng is SO much nicer than libjpeg (and its API clones). Making the important structs have totally hidden implementations makes loading at runtime possible. I wish libjpeg did this, particularly since I'd like to be able to try linking to libjpeg-turbo, libjpeg/simd, and mozjpeg explicitly before defaulting to whatever generic libjpeg the system has. Otherwise, I'd need to put fairly large amounts of libjpeg inside TS to be able to load it properly at runtime. So for now it's a compile-time flag.

Re: TurboSphere
Reply #424
I've written a new BMPFont plugin. It's designed so that it can also be used by external tools (like a TTF-RFN converter), in addition to being used as a plugin.

The new plugin uses Sapphire's API for extending Surfaces. The following script demonstrates using BMP fonts with Surfaces in Sapphire:

Code: (javascript) [Select]

// Pull in the standard color definitions (Black, etc.).
RequireSystemScript("colors.js");

// Draw some text onto a 64x64 surface with the system BMP font.
var s = new Surface(64, 64, Black);
var Fonty = GetSystemFont();

s.drawText(Fonty, 0, 0, "Sphere");
s.drawText(Fonty, 4, 16, "Fonts!");

// Upload the surface as an image and build a textured quad from it.
var TextImage = new Image(s);
var DefaultShader = GetDefaultShaderProgram();

var Vertices = [new Vertex(0, 0),   new Vertex(64, 0),
                new Vertex(64, 64), new Vertex(0, 64)];

var TextShape = new Shape(Vertices, TextImage);
var TextGroup = new Group(TextShape, DefaultShader);


The new BMPFont plugin can handle fonts with up to 2^32 glyphs, in Sphere RFN format version 1 or 2. It can print UTF-8 (v1 and v2) and ASCII strings. Obviously, we'd need to make some Unicode RFN fonts to really take advantage of that, but now it's not an inherent limitation.

I plan on making a new TTF-RFN converter that can handle full Unicode code points and uses this plugin's RFN API as its backend, in combination with SDL_ttf (or perhaps FreeType directly?).
  • Last Edit: July 28, 2014, 04:16:59 am by Flying Jester

  • N E O
  • Administrator
  • Senior Administrator
Re: TurboSphere
Reply #425
Re Unicode RFN fonts - I remember coming across problems when trying to convert a TTF with 1000s of characters (might have been Arial Unicode MS, I don't quite remember) using the vanilla editor's conversion function, so I'm not entirely sure if the vanilla engine was ever able to handle such large RFNs in the first place. This would be a huge step forward in the I18N aspect of game dev in Sphere.

Re: TurboSphere
Reply #426
There are two issues with Unicode support. First, an RFN can only have 2^16 characters. You need more than 16 bits to represent all possible Unicode code points, and the original UTF-8 specification in particular can theoretically encode significantly more. Since the vanilla tools zero out all reserved data blocks, I read 2 extra bytes and use them as the most significant bytes when checking the number of characters. So no vanilla RFN file will ever read as having more than 2^16 characters, and backwards compatibility is maintained.
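As a sketch of that trick (the offsets here are illustrative, not the actual RFN header layout), reading the extended glyph count might look like this:

Code: (javascript) [Select]

```javascript
// Hypothetical sketch of the extended glyph count described above.
// The vanilla header stores a 16-bit glyph count; two adjacent reserved
// bytes (always zeroed by the vanilla tools) are reused as the high bits.
function readGlyphCount(header) {
    var low  = header.readUInt16LE(0); // vanilla 16-bit count
    var high = header.readUInt16LE(2); // reserved bytes, zero in vanilla files
    return high * 0x10000 + low;       // up to 2^32 glyphs
}
```

A vanilla font reads back unchanged, since its reserved bytes are always zero.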

The other issue is that the engine needs to be Unicode aware, and convert Unicode strings to a series of code points. This is the part that I don't believe Sphere 1.x does. I haven't actually checked, mind you, but I've never seen it done, and I seem to recall any extended characters being represented by the same fixed character number (127?). TurboSphere parses Unicode strings, converts them to a series of code points, and skips code points that the font doesn't contain. Ideally, I would do something different (like the hex code dumps that you see a lot in Unix, or just 'char not found' characters).
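That parsing step amounts to something like this sketch (hasGlyph is a stand-in for the real font lookup; this is not the engine's actual code):

Code: (javascript) [Select]

```javascript
// Walk a string's Unicode code points, keeping only those the font has.
function drawableCodePoints(str, hasGlyph) {
    var points = [];
    for (var i = 0; i < str.length; ) {
        var cp = str.codePointAt(i);
        if (hasGlyph(cp))
            points.push(cp);          // code points the font lacks are skipped
        i += cp > 0xFFFF ? 2 : 1;     // astral code points take two UTF-16 units
    }
    return points;
}
```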

One issue with that is that an RFN font must necessarily be at least 256+(32*largest_code_point) bytes, whatever the font. So if you imported an ASCII font, but then added, say, code point 2002 (nbs), the font must be at least 292 KB plus however big the font was anyway. On the bright side, it would compress very well  ;D That's not properly a limitation, though, just a minor inconvenience.
  • Last Edit: July 28, 2014, 06:20:27 pm by Flying Jester

Re: TurboSphere
Reply #427

One issue with that is that an RFN font must necessarily be at least 256+(32*largest_code_point) bytes, whatever the font. So if you imported an ASCII font, but then added, say, code point 2002 (nbs), the font must be at least 292 KB plus however big the font was anyway. On the bright side, it would compress very well  ;D That's not properly a limitation, though, just a minor inconvenience.

You should consider just bumping the version number and including a new table in the format with the character ranges that the font contains--e.g. an array of (int32, int32) pairs. Then your font could have simply 0x00 0xFF 0x2002 0x2002 as the range table and only include 257 glyphs. Wouldn't be too hard to implement.
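A lookup against such a range table could be as simple as this sketch (names are illustrative):

Code: (javascript) [Select]

```javascript
// ranges: array of [first, last] code point pairs, in file order, mapping
// onto a packed glyph array; returns -1 if the font lacks the code point.
function glyphIndex(ranges, cp) {
    var base = 0;
    for (var i = 0; i < ranges.length; i++) {
        var first = ranges[i][0], last = ranges[i][1];
        if (cp >= first && cp <= last)
            return base + (cp - first); // position in the packed glyph array
        base += last - first + 1;       // glyphs consumed by earlier ranges
    }
    return -1;
}
```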

Re: TurboSphere
Reply #428
I could. I'm not too keen on implementing any new formats until they have editor support, though.

I've updated TurboSphere to use the new version of T5, which means it can use either INI or JSON for its configuration files!
It's also the first step to transparent compression and archive support.

Re: TurboSphere
Reply #429
Working on Surface.transformBlitSurface (with a screenshot of RunnerGuy ;D ). It's almost done, except for some rounding errors (as illustrated by the occasional holes). I also need to add the side-juggling code, so that it actually uses the longest sides first.

Note that my implementation does not use an affine transformation, but instead uses triple Bresenhams with UV mapping. Because of this, the entire quad is rendered as a smooth surface (as though it existed in 3D space), instead of as two triangles as in Sphere 1.x.
Since all surface manipulations are done on a separate thread, and surface operations can be performed out of order, we can do things like this much more easily. Even so, we can draw 1920x1080 pixels of surface this way in under 10 ms. This is easily the slowest surface operation, but even my naive implementation (which lacks the third error-breaking pass) can draw very fast--over 100 FPS at full HD resolution!
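The guide-and-raster idea can be sketched like this (a simplified stand-in, not the engine's code; it uses a fixed step count on both axes, and too few steps produces exactly the holes mentioned above):

Code: (javascript) [Select]

```javascript
function lerp(a, b, t) { return a + (b - a) * t; }

// corners: four {x, y, u, v} points in draw order; plot(x, y, u, v) receives
// rounded pixel coordinates plus interpolated texture coordinates.
function rasterizeQuad(corners, steps, plot) {
    for (var i = 0; i <= steps; i++) {
        var t = i / steps;
        // One point on each guide edge (corner 0->1 and corner 3->2).
        var a = { x: lerp(corners[0].x, corners[1].x, t),
                  y: lerp(corners[0].y, corners[1].y, t),
                  u: lerp(corners[0].u, corners[1].u, t),
                  v: lerp(corners[0].v, corners[1].v, t) };
        var b = { x: lerp(corners[3].x, corners[2].x, t),
                  y: lerp(corners[3].y, corners[2].y, t),
                  u: lerp(corners[3].u, corners[2].u, t),
                  v: lerp(corners[3].v, corners[2].v, t) };
        // Raster line between the two guide points, interpolating UVs too.
        for (var j = 0; j <= steps; j++) {
            var s = j / steps;
            plot(Math.round(lerp(a.x, b.x, s)), Math.round(lerp(a.y, b.y, s)),
                 lerp(a.u, b.u, s), lerp(a.v, b.v, s));
        }
    }
}
```

Because both edges are walked at the same rate, the quad is textured as one smooth surface rather than as two triangles.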
  • Last Edit: August 02, 2014, 03:31:34 am by Flying Jester

Re: TurboSphere
Reply #430
I'm still stumped by the fact that I can't set accessors in the new V8--I'm doing it the same way I was before, and the accessor API is not exactly rich, so I'm not sure what else I can do.

I've put in a workaround in the plugin tools. Each JSObj prototype has a vector of accessors associated with it, and they are simply applied to each created object when it is wrapped.
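In pure JavaScript terms, the workaround amounts to something like this sketch (hypothetical names; the real code does this on the C++ side with V8 handles):

Code: (javascript) [Select]

```javascript
// Keep a list of accessor definitions per wrapped type, and apply them to
// each object as it is wrapped, instead of installing them on the prototype.
function makeWrapper() {
    var accessors = [];
    return {
        addAccessor: function (name, getter, setter) {
            accessors.push({ name: name, get: getter, set: setter });
        },
        wrap: function (obj) {
            accessors.forEach(function (a) {
                Object.defineProperty(obj, a.name, { get: a.get, set: a.set });
            });
            return obj;
        }
    };
}
```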

  • Radnen
  • Senior Staff
  • Wise Warrior
Re: TurboSphere
Reply #431
I have seen artifacts like those in the image above. They are due to rounding, but more importantly due to the direction the blit occurs in. What should happen is that you 'lock in' the target surface so that its pixels are accessed one at a time, and then you guess them in from the source surface (the image data). If you start from the source surface (image data) and 'guess onto' the target surface, you'll get the default source color showing through wherever the rounding made a mistake.

Now, while the image may not have those holes anymore, you'll start to run into an accuracy problem. It turns out, though, that on large images or grainy textures the eye will never notice you took adjacent colors to fill in the gaps. So we are safe there. Notice how this method does not care about rounding; in fact, the 'bad' rounding only aims to enhance what you see.

edit: I should also add that's as basic as it gets; you can of course add filtering on top of that to increase image quality. I only explained a nearest-neighbor approach.
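For the simple case of a scaled blit, that destination-driven, nearest-neighbor approach looks roughly like this (a sketch, not any engine's actual code):

Code: (javascript) [Select]

```javascript
// Walk every target pixel and fetch ("guess") the nearest source pixel,
// so no target pixel is ever left uncovered. Surfaces are flat arrays.
function scaleBlit(src, srcW, srcH, dstW, dstH) {
    var dst = new Array(dstW * dstH);
    for (var y = 0; y < dstH; y++) {
        for (var x = 0; x < dstW; x++) {
            // Map the target pixel back into source space (nearest neighbor).
            var sx = Math.min(srcW - 1, Math.floor(x * srcW / dstW));
            var sy = Math.min(srcH - 1, Math.floor(y * srcH / dstH));
            dst[y * dstW + x] = src[sy * srcW + sx];
        }
    }
    return dst;
}
```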
  • Last Edit: August 08, 2014, 02:53:14 am by Radnen
If you use code to help you code you can use less code to code. Also, I have approximate knowledge of many things.

Sphere-sfml here
Sphere Studio editor here

Re: TurboSphere
Reply #432

I have seen artifacts like those in the image above. They are due to rounding, but more importantly due to the direction the blit occurs in. What should happen is that you 'lock in' the target surface so that its pixels are accessed one at a time, and then you guess them in from the source surface (the image data). If you start from the source surface (image data) and 'guess onto' the target surface, you'll get the default source color showing through wherever the rounding made a mistake.


I am well aware, but I'm not doing it either way. I'm drawing lines from corner 0 to 1 and then 3 to 2 with a constant distance between each point, and then drawing raster lines between equivalent points on those lines. I have two issues going on in that shot. One, I am just assuming that the first of the two guides is longer (certainly not always true) and using its length to determine how many raster lines to draw (I need to actually sort the sides so that I order them from largest to smallest, and change the resulting UV limits and granularity for each guide and raster line). Two, I'm using rounded coordinates for the raster lines. That makes lines look nice when you want to actually draw a line primitive--not so much when you want to ensure coverage.

This method is slower, to be sure, but it provides high quality texturing, and very easily avoids affine distortion.

Re: TurboSphere
Reply #433
So, here's a question:

Is there a good reason to make framerate changes totally in-order?

Currently, FPS throttling is done statelessly. Once you call SetFrameRate, the framerate is changed atomically and immediately--no matter what the current relationship is between the render thread and the engine thread.

So, let's say you draw some frames (just worrying about actual FlipScreens in the pipeline), and you've set the FPS to 30 beforehand:

Code: [Select]

Draw Queue:
[Frame 0|Frame 1|Frame 2|Frame 3]
Engine:
[Frame 4]

So right now, we've sent four frames to the draw queue that aren't drawn yet. The engine has no idea; it's working on the fifth frame, and doesn't care what's happening in the render thread (or that the render thread is behind). All those frames will be drawn at 30 FPS (30-ish ms between each one).

Code: [Select]

[Frame 0 | 30 ms interval | Frame 1 | 30 ms interval | Frame 2 | 30 ms interval | Frame 3]


Then you change the frame rate to 60 FPS.

Ideally (theoretical ideally), if you then submitted two more frames, it would look like this:

Code: [Select]

[Frame 0 | 30 ms interval | Frame 1 | 30 ms interval | Frame 2 | 30 ms interval | Frame 3 | FPS Change Operation | 16 ms interval | Frame 4 | 16 ms interval | Frame 5]


Right now, what would happen is this:

Code: [Select]

The second you change the FPS:
[Frame 0 | 16 ms interval | Frame 1 | 16 ms interval | Frame 2 | 16 ms interval | Frame 3]
Then, with two more frames:
[Frame 0 | 16 ms interval | Frame 1 | 16 ms interval | Frame 2 | 16 ms interval | Frame 3 | 16 ms interval | Frame 4 | 16 ms interval | Frame 5]


And so, in one sense, you've interfered with past events. But you've also performed the action immediately, and applied it based on real time rather than on synchronous timings.

This is actually really different from how Sphere and other totally synchronous engines would do it--you can affect the FPS of frames you submitted in the past! This sounds like kind of a bad idea, because it introduces a certain amount of unpredictable behaviour (how many frames, if any, have their resulting intervals changed). But on the other hand, it is much lighter weight than sending a message through the render queue to make it happen in-order, and I can't especially think of a reason why you would want to be 100% sure that it executed in order.
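The difference between the two policies can be modeled with this sketch (illustrative only, not the engine's code; numbers in the queue are FPS changes, "F" marks a queued frame):

Code: (javascript) [Select]

```javascript
// Compute the inter-frame interval (ms) for each queued frame under the
// two policies. The starting framerate is assumed to be 30 FPS.
function frameIntervals(queue, mode) {
    var fps = 30, intervals = [];
    if (mode === "atomic") {
        // The change applies immediately and retroactively: whatever FPS is
        // current when frames are presented governs every pending frame.
        for (var i = 0; i < queue.length; i++)
            if (typeof queue[i] === "number") fps = queue[i];
        for (var j = 0; j < queue.length; j++)
            if (queue[j] === "F") intervals.push(Math.round(1000 / fps));
    } else {
        // "in-order": the change rides the queue like any other command,
        // so frames submitted before it keep the old interval.
        for (var k = 0; k < queue.length; k++) {
            if (typeof queue[k] === "number") fps = queue[k];
            else intervals.push(Math.round(1000 / fps));
        }
    }
    return intervals;
}
```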
  • Last Edit: August 08, 2014, 06:15:40 pm by Flying Jester

  • Radnen
  • Senior Staff
  • Wise Warrior
Re: TurboSphere
Reply #434
It's hard to say; you need to see how well that works when controlling a game. Maybe the rendering looks nice, but does the gameplay feel in step with what's going on? Synchronous engines like Sphere are unfortunately the only guaranteed way to make sure the game loop feels right when playing the game.

I read an interesting article on CPUs (the death of Moore's Law), and video games get the brunt of the damage because they rely heavily on monolithic synchronous game loops, especially on consoles and handhelds. There's very little in a game you can truly make threaded. If you can somehow make an asynchronous game engine that also feels right when playing, then you will have solved a fundamental problem plaguing game engines like the Unreal engine.

I'd say do some tests.

But I also say that if you used a variable timestep it might not make a difference since it would feel the same.