
I've been working on a fuzzy logic SDK for the past 3 months now, and it's come to the point where I need to start heavily optimizing the engine.

As with most "utility" or "needs" based AI systems, my code works by placing various advertisements around the world, comparing said advertisements against the attributes of various agents, and "scoring" the advertisement [on a per-agent basis] accordingly.
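For context, here's a minimal sketch of the idea (names are illustrative, not my SDK's actual API):

```python
from dataclasses import dataclass

@dataclass
class Advertisement:
    # What the ad promises to do to named attributes,
    # e.g. {"hunger": -0.6, "energy": 0.2}.
    effects: dict

@dataclass
class Agent:
    # Current attribute levels in [0, 1]; higher = stronger need.
    attributes: dict

def score(agent: Agent, ad: Advertisement) -> float:
    """Per-agent score: ads that reduce an agent's strongest
    needs score highest (a simple stand-in utility function)."""
    return sum(agent.attributes.get(attr, 0.0) * -delta
               for attr, delta in ad.effects.items())

def best_ad(agent: Agent, ads: list) -> Advertisement:
    return max(ads, key=lambda ad: score(agent, ad))
```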

This, in turn, produces highly repetitive graphs for most single-agent simulations. However, once multiple agents are taken into account, the system becomes highly complex and drastically harder for my computer to simulate (since agents can broadcast advertisements to one another, the pairwise interactions make the simulation scale like an NP problem).

Bottom: example of the system's repetitiveness, calculated against 3 attributes on a single agent.

Top: example of the system calculated against 3 attributes and 8 agents.

[Image: exp-system]

(Collapse at the beginning, with recovery shortly after. This is the best example I could produce that would fit in one image, since the recoveries are generally very slow.)

As you can see from both examples, even as the agent count increases, the system remains highly repetitive, and therefore wastes precious computation time.

I've been trying to re-architect the program so that, during periods of high repetitiveness, the Update function simply replays the repeating segment of the graph instead of recomputing it.
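A sketch of what I mean, assuming scores can be compared within a tolerance (names illustrative):

```python
def find_period(history, min_cycles=3, tol=1e-6):
    """Return the period p if the tail of `history` repeats at least
    `min_cycles` times within tolerance `tol`, else None."""
    n = len(history)
    for p in range(1, n // min_cycles + 1):
        if all(abs(history[-1 - i] - history[-1 - i - p]) < tol
               for i in range(p * (min_cycles - 1))):
            return p
    return None

# Inside the Update loop (sketch):
# p = find_period(score_history)
# next_score = score_history[-p] if p else expensive_rescore()
```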

While it's certainly possible for my fuzzy logic code to predict a collapse and/or stabilization of the system, doing so is extremely taxing on the CPU. I'm thinking machine learning would be the best route to take, since it seems that once the system has had its initial setup created, periods of instability are always about the same length. (However, they occur at "semi" random times. I say semi because the onset is usually easy to notice from distinct patterns in the graph; however, like the length of the instability, these patterns vary greatly from setup to setup.)

If the unstable periods are all the same length, then once I know when the system collapses, it's easy to figure out when it will reach equilibrium.
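In code, that prediction would be trivial (times in ticks, purely illustrative):

```python
def predict_equilibrium(t_collapse: float, instability_length: float) -> float:
    """If the instability length is roughly constant for a given setup,
    equilibrium time is just a fixed offset from the collapse time."""
    return t_collapse + instability_length

# e.g. collapse detected at tick 1200, learned instability length of 350 ticks:
print(predict_equilibrium(1200, 350))  # -> 1550
```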

On a side note: not all configurations are 100% stable during periods of repetition. This is clearly visible in the graph below:

[Image: exception]

So the machine learning solution would need a way to differentiate between "pseudo" collapses and full collapses.

How viable would using an ML solution be? Can anyone recommend any algorithms or implementation approaches that would work best?

As for available resources, the scoring code does not map well at all to parallel architectures (due to the sheer number of interconnections between agents), so if I need to dedicate one or two CPU threads to these calculations, so be it. (I'd prefer not to use a GPU for this, as the GPU is already taxed by an unrelated non-AI part of my program.)

While this most likely won't make a difference, the system the code is running on has 18 GB of RAM free during execution, so a highly data-reliant solution would certainly be viable (although I'd prefer to avoid one unless necessary).

1 Answer


This is a problem often encountered in the engineering of control systems, where it is usually referred to as black-box time-series modeling. It's a "black box" in the sense that you don't know exactly what's inside: you give it some inputs and you can measure some outputs. Given a sufficient amount of data, a sufficiently simple system, and an appropriate modeling technique, it is often possible to approximate the system's behavior.

Many modeling techniques for this revolve around taking a certain discrete number of past inputs and/or measurements and attempting to predict what the next measurement in time will be. This is often referred to as an autoregressive model.
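A minimal sketch of that idea in plain NumPy, fitting the lag weights with ordinary least squares (the lag order n_lags is a tuning knob):

```python
import numpy as np

def fit_ar(series: np.ndarray, n_lags: int) -> np.ndarray:
    """Least-squares fit of x[t] ~ w . [x[t-n_lags], ..., x[t-1], 1]."""
    X = np.stack([series[i:i + n_lags]
                  for i in range(len(series) - n_lags)])
    X = np.hstack([X, np.ones((len(X), 1))])  # bias column
    y = series[n_lags:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_next(series: np.ndarray, w: np.ndarray) -> float:
    n_lags = len(w) - 1
    return float(np.append(series[-n_lags:], 1.0) @ w)
```

Once fitted, you can roll the prediction forward by feeding each predicted value back in as the newest measurement, which is exactly the cheap "replay" behavior you're after.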

Depending on the complexity of the system you're attempting to model, a nonlinear autoregressive exogenous (NARX) model might be a better choice. This could take the form of a neural network or radial-basis-function network which, once again, takes the past n measurements in time as inputs and gives a prediction of the next measurement as an output.
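As a sketch, a drop-in nonlinear variant of the same setup using a small multilayer perceptron (scikit-learn's MLPRegressor is just a convenient stand-in; a true NARX model would also append exogenous inputs, e.g. agent count or broadcast events, as extra input columns):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series: np.ndarray, n_lags: int):
    """Turn a 1-D series into (lag-window, next-value) training pairs."""
    X = np.stack([series[i:i + n_lags]
                  for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

series = np.sin(np.linspace(0, 40, 1000))  # stand-in for a score history
X, y = make_lagged(series, n_lags=16)

# Past 16 measurements in, prediction of the next measurement out.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
next_value = model.predict(series[-16:].reshape(1, -1))[0]
```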

Judging from your data, similar techniques should make it straightforward to build a simple model of the oscillatory behavior. As for modeling your collapses and pseudo-collapses, I think that could be captured by a sufficiently complex model, though it may be more difficult.
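For that harder part, one sketch (assuming you can label windows from replayed past runs as pseudo vs. full collapses; the features and scikit-learn classifier here are stand-ins, not a prescription):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window: np.ndarray) -> np.ndarray:
    """Summary statistics of one fixed-length score window."""
    return np.array([window.mean(), window.std(),
                     window.min(), window.max(),
                     np.abs(np.diff(window)).mean()])  # mean step size

# Placeholder data: one feature row per labelled window,
# 0 = pseudo collapse, 1 = full collapse.
windows = [np.random.rand(64) for _ in range(200)]
labels = np.random.randint(0, 2, size=200)
X = np.stack([window_features(w) for w in windows])

clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
# At runtime, classify the most recent window:
# clf.predict(window_features(latest_window).reshape(1, -1))
```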
