
What are some ways to efficiently and quickly code a spreading object?

Started by September 11, 2016 07:00 PM
6 comments, last by nullbear 8 years ago

I'm attempting to rework the flow mechanics of my system to be faster, more accurate, and more efficient. See the image below for an example of what I'm doing.

[animated example: Ug5P7KD.gif]

So the object stores a 'volume' and spreads its data to the cardinally adjacent tiles, losing a proportional amount of its volume in the process, until all accessible tiles have an equal volume. Simple enough.
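As a baseline, that spreading rule can be sketched as an explicit per-edge transfer: each tick, every open edge moves a fraction of the volume difference between its two tiles, and repeating the step drives every connected region toward a uniform volume. This is a minimal sketch under assumptions of my own (a 2-D list grid, `None` marking walls, and an invented transfer rate `k`), not the poster's actual code:

```python
WALL = None  # impassable tile marker (illustrative)

def spread_step(grid, k=0.2):
    """One tick: each open edge moves a fraction k of the volume difference."""
    h, w = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] is WALL:
                continue
            # visit each edge exactly once: east and south neighbours only
            for ny, nx in ((y, x + 1), (y + 1, x)):
                if ny < h and nx < w and grid[ny][nx] is not WALL:
                    flux = k * (grid[y][x] - grid[ny][nx])
                    nxt[y][x] -= flux
                    nxt[ny][nx] += flux
    return nxt
```

Because every flux is subtracted from one tile and added to its neighbour, total volume is conserved exactly, and repeated calls converge toward equal volume within each connected region.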

I'm trying to find different and ideally more efficient ways to implement this. Keep in mind that there can be more than one 'hotspot' at once, and the collision of different 'hotspots' needs to be taken into consideration.

[animated example: jy1Ytgs.gif]

What are your ideas for handling this?

-----------------------------------------------------------------------

My current idea:

Have a main 'hotspot' controller object which stores both 'active' and 'inactive' tiles in two separate arrays/lists.

Active tiles are 'tiles adjacent to tiles of unequal volume.'
Inactive tiles are 'tiles adjacent to tiles of equal volume, or black/wall tiles.'

For each process of the controller:

Active tiles equalize with their adjacent tiles, and adjust their volume by (total adjacent tiles / total hotspot tiles × hotspot volume).

Active tiles then become inactive tiles, and their adjacent tiles become active tiles.
Inactive tiles adjust their volume by the same amount, but negative.

Upon collision with another hotspot: if the other hotspot is 'newer', this one ignores it and treats it as a wall; otherwise it overtakes it as a normal 'adjacent' tile. If overtaking splits the previous hotspot, a new hotspot is created where the split occurred (for a total of three active hotspots).

When there are no more active tiles, the hotspot dies.
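The active/inactive bookkeeping above can be sketched like this. Everything here is illustrative (the names `tick`, `EPS`, the dict-of-tiles representation, and the rate 0.2 are my own assumptions, not the poster's implementation); tiles missing from the `volume` dict stand in for walls:

```python
EPS = 1e-3  # below this, a transfer is considered settled (invented threshold)

def tick(volume, active):
    """One controller pass. volume: {(x, y): float} for open tiles only
    (anything missing is a wall); active: set of tiles on the frontier."""
    next_active = set()
    snapshot = dict(volume)  # read old values, write new ones
    for cell in active:
        x, y = cell
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n not in volume:
                continue  # wall or outside the map
            if n in active and n < cell:
                continue  # shared edge already handled from the other side
            delta = 0.2 * (snapshot[cell] - snapshot[n])
            if abs(delta) > EPS:
                volume[cell] -= delta
                volume[n] += delta
                next_active.add(cell)
                next_active.add(n)
    return next_active  # the hotspot dies when this comes back empty
```

Tiles whose edges all settle below `EPS` simply drop out of the active set, which matches the "when there are no more active tiles, the hotspot dies" rule.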

HOWEVER, I still have the issue where the 'volume' of the inactive tiles changes even when it wouldn't make sense for it to change instantly.


[animated example: FoBfrE7.gif]

The biggest issue I have right now is when one group 'cuts' another active group in two, because as I understand it, I have two options: allow it to keep processing as though it's a single group (BAD), or recalculate the group EVERY single time it's overtaken by another active group (ALSO BAD).

~Nullie

Keeping a list of the cells that are at an active edge (as you sound like you are doing) would seem like the 'big win' optimization. Although whether it is a win may depend on the ratio of edges to cells in the whole system. How many cells are you using (is it even worth optimizing more)? Is it typically the size shown in your .gif? Or is it, e.g. the whole screen?

What language are you using? Have you profiled it? Are you sure the code for drawing it isn't more expensive than the calculation of the cells?

What do you mean by more accurate? I'm guessing you don't mean numerically. Do you mean giving, e.g. a round flow around a hotspot rather than a diamond?


There are about a million cells, though only a relatively tiny number of those are typically 'active' at any given time (1,000-10,000). Each cell is able to store a LOT of data (specifically, there are at least 150 different variables which need to be 'equalized' between cells).

By more accurate, I mean stuff like:

A X X X X X X X X B

Where X is the active group, it instantly teleports the contents of A to B, and vice versa, without any regard for the distance actually between them. A and B could be extremely different cells, and could be extremely far away from each other as well. So the 'superconductive' inactive cells are fine for small groups, but in larger or more extreme groups it can be a lot more jarring.

The 'most' accurate method would be to calculate everything on a cell-by-cell basis, but that seems both slow for the purpose of equalizing things and intensive.

Another concern is that I don't necessarily want things to equalize instantly. I.e., if the values are:

9, 9, 9, 9, 9, 5

I want it to become: 9, 9, 9, 9, 6 | 9, 9, 9, 9, 7 | 9, 9, 9, 9, 8 | 8, 8, 8, 8, 8 | or something similar. (Rounded, estimated numbers for laziness.)

So again, the ideal for accuracy would be:

5, 7, 7, 7, 9 | 6, 6, 7, 8, 8 | 6, 6.5, 7, 7.5, 8 | 6.5, 7, 7, 7, 7.5 | 7, 7, 7, 7, 7

Where it does not equalize instantly through itself. But this is intensive and slow, and can potentially be redundantly recursive if the cells are only operating on a cell-by-cell basis.



How I'm currently handling it is to only superconduct a small fraction of the delta through the inactive group at a time.

[animated example: r9tzkxP.gif]
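That "fraction of the delta" idea might be sketched like this: rather than snapping an equalized inactive group to its new target value instantly, move each member only part of the way per tick. The function name, the `fraction` parameter, and the dict representation are all invented for illustration:

```python
def superconduct(group_cells, volume, fraction=0.25):
    """group_cells: keys of one equalized (inactive) group in `volume`.
    Each call closes only `fraction` of every cell's gap to the group
    average, so distant cells don't teleport their contents instantly."""
    total = sum(volume[c] for c in group_cells)
    target = total / len(group_cells)
    for c in group_cells:
        volume[c] += fraction * (target - volume[c])
```

Since the per-cell adjustments sum to zero, total volume is conserved on every call, and repeated calls approach the uniform target geometrically rather than in one jump.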

~Nullie

So, it sounds like you're doing some sort of flood fill from multiple locations, but also going back and changing all tiles that have been flooded? Is that correct? So a variation on a Dijkstra flood fill?

What's your profiler telling you is taking the most time? Also, I wonder if you might get better performance by cutting some of the data on the cells, or splitting the cells into multiple arrays. I.e., if you are only updating one set of data during flood fills, you'd be better off having a grid of just that data, and a separate grid for the other data; then you're not retrieving the entire chunk of 150 variables when you're only modifying the 136th, 112th, and 139th variables. (Cache coherence.)
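The split-the-fields suggestion is the classic array-of-structs versus struct-of-arrays trade-off. Sketched in Python for concreteness (field names and grid size are invented; in a real engine these would likely be typed arrays or raw C buffers):

```python
W, H = 1024, 1024  # about a million cells, as described above

# Array-of-structs (what to avoid in hot loops): touching one field of a
# cell drags all ~150 of its fields through the cache.
# cells = [{"pressure": 0.0, "temperature": 293.0} for _ in range(W * H)]

# Struct-of-arrays: one flat, contiguous array per field, so an
# equalization pass over pressure reads only pressure values.
pressure = [0.0] * (W * H)
temperature = [293.0] * (W * H)

def idx(x, y):
    """Flat index; cardinal neighbours of i are i - 1, i + 1, i - W, i + W."""
    return y * W + x
```

The flood-fill pass then iterates one array at a time, keeping the cells it touches close together in memory.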

AH! Maybe a better idea:

[animated example: aWqWNFz.gif]

Keep the spreading logic, but superconduct a small amount not only within your own inactive cells, but with the other hotspot's inactive cells as well.

[animated example: vMNWeSi.gif]

~Nullie

OK, gradually teasing out more information... is this for a game? Or is it precalculated?

If it is for a game, is it something you want to change on every frame, like a water simulation? With games, usually what you would ask is: how many cells can I get away with for a fast calculation? Then design the game around that, not the other way around. And consider how you are going to get this to the GPU, if you are doing this per frame and it's a big texture (again, not enough background info). Or, if it is not per pixel, why calculate stuff that is off screen?

These 'griddy things' come up time and time again in game and graphics programming (so much so that I have templates in my template library to do the monkey work). I think the official name might be 'continuous cellular automaton'; maybe you can do some googling for optimizations.

https://en.wikipedia.org/wiki/Continuous_automaton

http://cell-auto.com/optimisation/

https://arxiv.org/pdf/1208.2428.pdf

If you want to iterate a large field, you might be able to do a bit at a time each frame, or use a more complex iteration over a number of frames and then just use simple interpolation to get per-frame positions. There are also versions which don't fit to a grid, continuous spatial automata:

https://en.wikipedia.org/wiki/Continuous_spatial_automaton

I also saw a guy use OpenCL or something on the GPU to speed stuff like this up with impressive results.

Personally I've never managed to get stuff like this running *that* fast, so I'll be as interested as you in any super methods. :) There might be some tricks to store the grid in a cache-friendly manner or use SSE. With 'Game of Life' cellular automata you can use a simple look-up table to find the result depending on the contents of neighbouring cells; a small LUT might be an option if your rules are complex.
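For reference, the Game of Life look-up-table trick mentioned above amounts to precomputing the next state for every (current state, live-neighbour count) pair, so the inner loop is a single table read. Whether something similar is feasible here depends on how many distinct neighbourhood states the gas rules can produce:

```python
# Conway's Life next-state for every (alive, live-neighbour-count) pair,
# built once up front: a cell lives next tick if it has exactly 3 live
# neighbours, or is currently alive with exactly 2.
LUT = {(alive, n): int(n == 3 or (alive == 1 and n == 2))
       for alive in (0, 1) for n in range(9)}

# the inner loop then becomes: next_cell = LUT[(cell, live_neighbours)]
```

For continuous-valued rules like a gas simulation, an exact table is impossible, but a coarsely quantized one can sometimes stand in for an expensive transfer function.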


This is basic propagation.

Think flood fill, plus the rate of flow of particles from one tile to the next being proportional to the number of particles in each (i.e. like thermodynamic transfer).

A steady state is achieved when the particles-per-tile density of the entire flood-filled area is uniform.

EDIT:

OK, a little more:

So you do a flood fill, but the number of particles in each tile determines the rate of spread from one tile to the next. So you'd have to repeatedly solve for inflow and outflow of each tile until enough cycles had passed for your "gas to disperse uniformly in your area", so to speak. I did something like this back in school; I can't remember what it was... it was something related to inflow and outflow to/from multiple sources at different rates.

Note that flood fill will tell you what tiles to process, but you'll probably want to use the current version of the map to calculate the next version, then replace the current with the next (i.e. process all, then update all), not process and update tiles one at a time (like flood fill does recursively). If you update one at a time, you could end up mixing old and new values: say the tile north has been updated, but south, east, and west have not yet, so you're using the "new" north and the "old" east, west, and south values to calculate the "new" value for the current tile.
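The process-all-then-update-all advice boils down to double buffering, which can be sketched as follows (the function name and the transfer rate `k` are invented; walls are omitted for brevity):

```python
def step(cur, k=0.2):
    """One tick: read only the CURRENT map, write only the NEXT map,
    so no tile ever mixes old and new neighbour values."""
    h, w = len(cur), len(cur[0])
    nxt = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total_flux = 0.0
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w:
                    total_flux += k * (cur[ny][nx] - cur[y][x])
            nxt[y][x] = cur[y][x] + total_flux  # only old values were read
    return nxt  # caller swaps: cur = step(cur)
```

Because every pair of adjacent tiles exchanges equal and opposite fluxes computed from the same snapshot, the result is order-independent and conserves the total.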

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

@lawnjelly The graphics calculations aren't a concern at all. The intensive bit is the mixing and equalizing of data between cells. It's called about twice per second. (It is a 'passive' fluid simulation for a game, though typically of gases as opposed to liquids.)


@norman barrows:

Yes, I know what it is. I'm just not sure how to do it quickly and efficiently on a potentially large scale.

I've explained that it's for fluid simulation, typically of gases. Here is an example of what you might see:

21 moles of oxygen and 80 moles of nitrogen, at 20 degrees Celsius, in 'group A'.

100 moles of carbon dioxide, at 1,000 degrees Celsius, in 'group B'.

Not only are we spreading the molar count evenly between the tiles, but also the temperature (which affects the pressure of the fluid, and thus the rate at which it disperses).
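As an illustration of mixing both quantities at once (this is not the poster's actual code, and weighting heat capacity by total moles is a deliberate simplification of real gas thermodynamics), equalizing two tiles might look like:

```python
def mix(tile_a, tile_b):
    """Fully mix two tiles. Each tile: {"gases": {name: moles}, "temp": kelvin}.
    Every gas species is split evenly; the mixed temperature is weighted by
    each side's total moles (a stand-in for real per-gas heat capacities)."""
    moles_a = sum(tile_a["gases"].values())
    moles_b = sum(tile_b["gases"].values())
    # pool every species present on either side and split it evenly
    for s in set(tile_a["gases"]) | set(tile_b["gases"]):
        total = tile_a["gases"].get(s, 0.0) + tile_b["gases"].get(s, 0.0)
        tile_a["gases"][s] = tile_b["gases"][s] = total / 2.0
    # mole-weighted temperature: the side with more gas dominates the mix
    if moles_a + moles_b > 0:
        t = (tile_a["temp"] * moles_a + tile_b["temp"] * moles_b) / (moles_a + moles_b)
        tile_a["temp"] = tile_b["temp"] = t
```

Running this on the group A / group B example above splits each species 50/50 and lands the mixed temperature between the two starting temperatures, closer to whichever side held more moles.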

~Nullie

This topic is closed to new replies.
