I'd like to write a "smart monitor" in Java that sends out an alert any time it detects oncoming performance issues. My Java app is writing data in a structured format to a log file:

<datetime> | <java-method> | <seconds-to-execute>

So, for example, if I had a Widget#doSomething(String) method that took 812ms to execute, it would be logged as:

2013-03-24 11:39:21 | Widget#doSomething(String) | 812
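For reference, a rough sketch of how one of these entries might be parsed along the pipe separators (the LogEntry class and its field names are just illustrative):

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class LogEntry {
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    final LocalDateTime timestamp;
    final String method;
    final long duration;   // the right-most column, in whatever unit the log uses

    LogEntry(LocalDateTime timestamp, String method, long duration) {
        this.timestamp = timestamp;
        this.method = method;
        this.duration = duration;
    }

    static LogEntry parse(String line) {
        // e.g. "2013-03-24 11:39:21 | Widget#doSomething(String) | 812"
        String[] parts = line.split("\\s*\\|\\s*");
        return new LogEntry(
                LocalDateTime.parse(parts[0], FMT),
                parts[1],
                Long.parseLong(parts[2]));
    }
}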

As performance starts to degrade (such as during a major collection, during peak loads, or if the system is just slowing to a crawl), method execution timings start to slow down; so the right-most column starts to see huge numbers (sometimes 20-40 seconds to execute a single method).

In college - for a machine learning exercise - I wrote what my professor called a linear dichotomizer that took simple test data (the height, weight and gender of a person) and "learned" how to categorize a person as male or female based on their height/weight. Then, once it had all its training data, we fed it new data to see how accurately it could determine gender.

I think the multivariate version of a linear dichotomizer is something called a support vector machine (SVM). If I'm wrong, then please clarify and I'll change the title of my question to something more appropriate. Regardless, I need this app to do the following things:

  • Run in a "test mode" where I feed it the structured log file from my main Java app (the one I wish to monitor) and it takes each log entry (as shown above) and uses it for test data
    • Only the java-method and seconds-to-execute columns are important as inputs/test data; I don't care about the datetime
  • Run in "monitor mode" where it is actively reading new log data from the log file, and using similar "machine learning" techniques to determine if a a performance degradation is looming

It's important to note that the seconds-to-execute column is not the only important factor here, as I've seen horrible timings for certain methods during periods of awesome performance, and really great timings for other methods at times when the server seemed like it was about to die and push up daisies. So obviously certain methods are "weighted"/more important to performance than others.

My question

  • Googling for "linear dichotomizer" or "support vector machines" turns up some really scary, highly-academic, ultra-cerebral white papers that I just don't have the mental energy (nor time) to consume - unless they truly are my only options; so I ask: is there a layman's introduction to this stuff, or a great site/article/tutorial for building such a system in Java?
  • Are there any solid/stable open source Java libraries? I was only able to find jlibsvm and svmlearn but the former looks to be in a pure beta state and the latter seems to only support binary decisions (like my old linear dichotomizer). I know there's Mahout but that sits on top of Hadoop, and I don't think I have enough data to warrant the time and mental energy into setting up my own Hadoop cluster.

Thanks in advance!

1 Answer


A "smart monitor" you describe is exactly time-series classification.

There are many classification algorithms. They all basically take a matrix, where the rows are observations and the columns are "features" that somehow describe the observation, plus a label vector whose length equals the number of rows and whose entries are either 0 or 1. In your problem an observation might be a one-second sample, and your label vector will be 1 for the time periods that are experiencing performance issues and 0 otherwise.

Implicit in this definition is the need to resample your data (using the mode/median/mean if necessary) so that each observation is evenly spaced, e.g. one per second, minute, or hour.
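For example, a minimal sketch of resampling to one observation per minute by averaging the execution times, assuming a parsed LogEntry record like the one sketched in the question:

import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class Resampler {
    /** Mean execution time per whole minute, keyed by the truncated timestamp. */
    static SortedMap<LocalDateTime, Double> meanPerMinute(List<LogEntry> entries) {
        return entries.stream().collect(Collectors.groupingBy(
                e -> e.timestamp.truncatedTo(ChronoUnit.MINUTES),  // bucket by minute
                TreeMap::new,                                      // keep time order
                Collectors.averagingLong(e -> e.duration)));       // mean per bucket
    }
}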

Generating the features is the crucial part. I'd probably start with two kinds of features: the raw values, and the (once) differenced values between observation x_i and x_(i-1). We'll define these for a lag of 2, which technically makes 4 features. No feature is allowed to look into the future.

Each feature should represent the same thing for every observation.

For example consider the time-series of length 10:

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

If we want to produce a set of features using a lag of two intervals into the past, then the first two elements of the time-series are considered a burn-in sample; we can't use the observations corresponding to them to train our algorithm.

The raw values, 8 rows by 2 columns, would be:

[[1., 0.],
 [2., 1.],
 [3., 2.],
 [4., 3.],
 [5., 4.],
 [6., 5.],
 [7., 6.],
 [8., 7.]]

The differenced values:

[[1., 1.],
 [1., 1.],
 [1., 1.],
 [1., 1.],
 [1., 1.],
 [1., 1.],
 [1., 1.]]

These get column-stacked. There are many additional features you could explore; a rolling mean would be my next pick.
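A minimal sketch in Java of building such a feature matrix: the two raw lags plus the two lagged first differences, column-stacked. The names are illustrative, and one extra burn-in row is needed for the second difference, so this version yields 7 rows for the series above:

public class LagFeatures {
    /** Rows start at the first index where all four features are defined. */
    static double[][] build(double[] x) {
        int burnIn = 3;                          // x[i-3] is needed for the second difference
        double[][] features = new double[x.length - burnIn][4];
        for (int i = burnIn; i < x.length; i++) {
            double[] row = features[i - burnIn];
            row[0] = x[i - 1];                   // raw value, lag 1
            row[1] = x[i - 2];                   // raw value, lag 2
            row[2] = x[i - 1] - x[i - 2];        // first difference, lag 1
            row[3] = x[i - 2] - x[i - 3];        // first difference, lag 2
        }
        return features;
    }

    public static void main(String[] args) {
        double[] series = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
        for (double[] row : build(series)) {
            System.out.println(java.util.Arrays.toString(row));  // first row: [2.0, 1.0, 1.0, 1.0]
        }
    }
}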

If you want to predict further into the future, then your features should be lagged further behind your label vector.

If performance is not satisfactory, try adding more features, e.g. by taking a rolling mean over a larger window or by adding more lags further back in time. A clever trick to boost the performance of time-series algorithms is to include the value of the prediction for the previous period as a feature.

Fit your classifier on some early part of the data, then observe its accuracy over a later part. There are many metrics for classifiers you can use. If you choose a classifier that outputs probabilities rather than hard 1/0 decisions, then your options broaden even further (as do the uses of your classifier).

Precision and recall are intuitive performance metrics of classifiers.

Train on the first (earlier) half of your data and test on the second (later) half.
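For example, a small sketch of computing precision and recall from hard 0/1 predictions on the held-out later half (plain arrays, just to show the definitions):

public class Evaluation {
    /** Of everything predicted 1, how much actually was 1. */
    static double precision(int[] actual, int[] predicted) {
        int tp = 0, fp = 0;
        for (int i = 0; i < actual.length; i++) {
            if (predicted[i] == 1) {
                if (actual[i] == 1) tp++; else fp++;
            }
        }
        return (tp + fp) == 0 ? 0.0 : (double) tp / (tp + fp);
    }

    /** Of everything that actually was 1, how much we predicted as 1. */
    static double recall(int[] actual, int[] predicted) {
        int tp = 0, fn = 0;
        for (int i = 0; i < actual.length; i++) {
            if (actual[i] == 1) {
                if (predicted[i] == 1) tp++; else fn++;
            }
        }
        return (tp + fn) == 0 ? 0.0 : (double) tp / (tp + fn);
    }
}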

As far as algorithms go, I'd look into logistic regression. I'd only look elsewhere if performance is not satisfactory after you have exhausted your feature-extraction options.
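To make the idea concrete, here is a from-scratch sketch of logistic regression trained with stochastic gradient descent; in practice a library implementation is preferable, and the class and method names below are just illustrative:

public class LogisticRegression {
    private final double[] weights;   // one weight per feature
    private double bias;

    LogisticRegression(int numFeatures) {
        this.weights = new double[numFeatures];
    }

    /** Probability that the observation belongs to class 1. */
    double predictProbability(double[] x) {
        double z = bias;
        for (int j = 0; j < weights.length; j++) {
            z += weights[j] * x[j];
        }
        return 1.0 / (1.0 + Math.exp(-z));
    }

    /** One pass of stochastic gradient descent per epoch over the training rows. */
    void train(double[][] features, int[] labels, double learningRate, int epochs) {
        for (int epoch = 0; epoch < epochs; epoch++) {
            for (int i = 0; i < features.length; i++) {
                double error = labels[i] - predictProbability(features[i]);
                for (int j = 0; j < weights.length; j++) {
                    weights[j] += learningRate * error * features[i][j];
                }
                bias += learningRate * error;
            }
        }
    }
}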

Mallet appears to be a good library for the task. See this bit of the docs.

I recently discovered JSAT, which looks promising.

What I've described is a general-purpose adaptation of classification to time-series; there are also more specific approaches to time-series classification that explicitly take into account the sequential nature of the observations and labels.
