I've been working on a fuzzy logic SDK for the past 3 months now, and it's come to the point where I need to start heavily optimizing the engine.
As with most "utility" or "needs" based AI systems, my code works by placing various advertisements around the world, comparing said advertisements against the attributes of various agents, and "scoring" the advertisement [on a per-agent basis] accordingly.
This, in turn, produces highly repetitive graphs for most single-agent simulations. However, once multiple agents are taken into account, the system becomes far more complex and drastically harder for my computer to simulate, since agents can broadcast advertisements to one another and the number of interactions blows up combinatorially (effectively an NP problem).
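To make the scoring step concrete, here's a heavily simplified sketch of what a single update amounts to (the attribute names, dataclasses, and membership function are placeholders, not my actual code):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    id: int
    attributes: dict   # e.g. {"hunger": 0.7, "energy": 0.3}

@dataclass
class Advertisement:
    id: int
    attributes: dict   # the attribute values this advertisement appeals to

def score_advertisement(ad: Advertisement, agent: Agent) -> float:
    """Fuzzy-match an advertisement's desired attributes against one agent."""
    score = 0.0
    for name, desired in ad.attributes.items():
        actual = agent.attributes.get(name, 0.0)
        # Simple triangular membership: 1.0 at a perfect match, falling off with distance.
        score += max(0.0, 1.0 - abs(desired - actual))
    return score / len(ad.attributes)

def score_all(agents, ads):
    # Every agent scores every advertisement each tick -- this is the expensive part.
    return {(agent.id, ad.id): score_advertisement(ad, agent)
            for agent in agents for ad in ads}
```

With many agents and advertisements (plus the agent-to-agent broadcasts), that double loop is what dominates the frame time.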
Bottom: example of the system's repetitiveness, calculated against 3 attributes on a single agent:
Top: example of the system calculated against 3 attributes and 8 agents:
(Collapse at the beginning and recovery shortly after. This is the best example I could produce that would fit in one image, since the recoveries are generally very slow.)
As you can see from both examples, even as the agent count increases, the system remains highly repetitive and therefore wastes precious computation time.
I've been trying to re-architect the program so that, during periods of high repetitiveness, the Update function simply replays the repeating segment of the graph instead of recomputing it.
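My rough idea for that replay path looks something like the sketch below; the period detection and the score containers are simplified stand-ins for my real data structures:

```python
from collections import deque

class RepetitionCache:
    """Detect when recent score snapshots repeat with a fixed period and,
    while they do, replay the cached cycle instead of re-scoring everything."""

    def __init__(self, window=240, tolerance=1e-3):
        self.history = deque(maxlen=window)   # recent score snapshots (dicts)
        self.tolerance = tolerance
        self.cycle = None                     # cached snapshots for one full period
        self.cycle_pos = 0

    def _close(self, a, b):
        return all(abs(a[k] - b[k]) <= self.tolerance for k in a)

    def _find_cycle(self):
        # Smallest period p such that the last p frames match the p frames before them.
        frames = list(self.history)
        for period in range(1, len(frames) // 2 + 1):
            if all(self._close(frames[-i], frames[-i - period])
                   for i in range(1, period + 1)):
                return frames[-period:]
        return None

    def next_scores(self, recompute):
        """Return cached scores while a cycle holds; otherwise run the full update."""
        if self.cycle is not None:
            scores = self.cycle[self.cycle_pos % len(self.cycle)]
            self.cycle_pos += 1
            # NOTE: in practice you'd re-verify every so often in case the
            # system drifts out of the repeating pattern.
            return scores
        scores = recompute()
        self.history.append(scores)
        self.cycle = self._find_cycle()
        return scores
```

The hard part is knowing when it's safe to enter (and leave) that replay path, which is where the prediction question below comes in.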
While it's certainly possible for my fuzzy logic code to predict a collapse and/or stabilization of the system, doing so is extremely taxing on my CPU. I'm considering machine learning as the best route to take here, because once a setup has been created, its periods of instability always seem to last about the same amount of time. (They do occur at "semi" random times; I say semi because the onset is usually easy to spot from distinct patterns in the graph. However, like the length of instability, those patterns vary greatly from setup to setup.)
If the unstable periods are all the same length, then once I know when the system collapses, it becomes significantly easier to figure out when it will reach equilibrium.
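Assuming that holds, tracking the typical instability length per setup and projecting the equilibrium point could be as simple as the following sketch (the tick counting and collapse/stabilization hooks are placeholders for whatever my engine actually exposes):

```python
import statistics

class InstabilityTracker:
    """Learn the typical length of unstable periods for the current setup,
    then predict when the system should reach equilibrium after a collapse."""

    def __init__(self):
        self.durations = []        # observed lengths of past unstable periods (ticks)
        self.collapse_start = None

    def on_collapse(self, tick):
        self.collapse_start = tick

    def on_stabilized(self, tick):
        if self.collapse_start is not None:
            self.durations.append(tick - self.collapse_start)
            self.collapse_start = None

    def predicted_equilibrium(self, collapse_tick):
        # With a few observed periods, the median length is usually a decent guess.
        if not self.durations:
            return None
        return collapse_tick + statistics.median(self.durations)
```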
On a side note, not all configurations are 100% stable during periods of repetition.
This is clearly visible in the graph, so the machine learning solution would need a way to differentiate between "pseudo" collapses and full collapses.
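If I did go the ML route, I imagine that step would look roughly like the sketch below. The feature choice (drop depth, post-drop variance, recovery slope) is just a guess on my part, and scikit-learn is only used as an example classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def collapse_features(window):
    """Summarize a fixed-length window of scores starting at a suspected collapse."""
    w = np.asarray(window, dtype=float)
    drop = w[0] - w.min()                                  # how deep the fall is
    tail = w[len(w) // 2:]                                 # second half of the window
    variance = tail.var()                                  # how settled it is afterwards
    slope = np.polyfit(np.arange(len(tail)), tail, 1)[0]   # recovery trend
    return [drop, variance, slope]

def train_collapse_classifier(windows, labels):
    # windows: score windows captured around past collapses
    # labels:  1 = full collapse, 0 = pseudo collapse (labelled by letting runs play out)
    X = np.array([collapse_features(w) for w in windows])
    clf = LogisticRegression()
    clf.fit(X, np.asarray(labels))
    return clf
```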
How viable would using an ML solution be? Can anyone recommend any algorithms or implementation approaches that would work best?
As for available resources: the scoring code does not map well at all to parallel architectures (due to the sheer number of interconnections between agents), so if I need to dedicate one or two CPU threads to these calculations, so be it. (I'd prefer not to use the GPU for this, as it's already taxed by an unrelated, non-AI part of my program.)
While this most likely won't make a difference, the machine the code runs on has 18 GB of RAM free during execution, so a highly data-reliant solution would certainly be viable (although I'd prefer to avoid one unless necessary).