
in Machine Learning by (19k points)
I work for a webhost and my job is to find and clean up hacked accounts. The way I find a good 90% of shells/malware/injections is to look for files that are "out of place." For example, eval(base64_decode(.......)), where "....." is a whole bunch of base64'ed text that is usually never good. Odd-looking files jump out at me as I grep through files for key strings.

If these files jump out at me as a human, I'm sure I can build some kind of profiler in Python to look for things that are "out of place" statistically and flag them for manual review. To start off, I thought I could compare the lengths of lines in PHP files containing key strings (eval, base64_decode, exec, gunzip, gzinflate, fwrite, preg_replace, etc.) and look for lines that deviate from the average by 2 standard deviations.
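Here is a minimal sketch of that 2-standard-deviation idea, assuming a directory tree of PHP files; the root path and key strings are illustrative, not a finished tool:

import os
import statistics

KEY_STRINGS = ("eval", "base64_decode", "exec", "gunzip",
               "gzinflate", "fwrite", "preg_replace")

def flag_long_lines(root="/var/www"):  # root path is an assumption
    lengths, candidates = [], []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as fh:
                for lineno, line in enumerate(fh, 1):
                    if any(key in line for key in KEY_STRINGS):
                        lengths.append(len(line))
                        candidates.append((path, lineno, len(line)))
    if len(lengths) < 2:
        return []
    mean = statistics.mean(lengths)
    stdev = statistics.stdev(lengths)
    # Flag lines more than 2 standard deviations above the mean length.
    return [(p, n, l) for p, n, l in candidates if l > mean + 2 * stdev]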

The line length varies widely, and I'm not sure it would be a good statistic to use. Another approach would be to assign weighted rules to certain things (line length over or under a threshold = X points, contains the word upload = Y points), but I'm not sure what I can actually do with the scores or how to score each attribute. My statistics is a little rusty.
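The weighted-rule idea can be prototyped with a simple additive score; the rules and point values below are made up for illustration and would need tuning against real data:

RULES = [
    (lambda line: len(line) > 500, 5),          # very long line
    (lambda line: "upload" in line, 2),         # contains the word upload
    (lambda line: "base64_decode" in line, 4),  # suspicious decode call
]

def score_line(line):
    # Sum the points of every rule the line triggers.
    return sum(points for test, points in RULES if test(line))

Lines whose total score exceeds a chosen threshold would go to manual review.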

Could anyone point me in the right direction (guides, tutorials, libraries) for statistical profiling?

1 Answer

by (33.1k points)

There is a simple machine learning approach you can take to get started on this problem and develop a baseline classifier:

Build a corpus of scripts and attach a label to each one: 'good' (label = 0) or 'bad' (label = 1); the more examples the better. The 'bad' scripts should make up a reasonable fraction of the total corpus; 50-50 good/bad is ideal.
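A minimal corpus loader might look like this, assuming the scripts have been sorted into two directories ('good' and 'bad' are illustrative names):

import glob

def load_corpus(good_dir="good", bad_dir="bad"):
    texts, labels = [], []
    for label, directory in ((0, good_dir), (1, bad_dir)):
        for path in glob.glob(directory + "/*.php"):
            with open(path, errors="ignore") as fh:
                texts.append(fh.read())
            labels.append(label)
    return texts, labels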

Develop binary features that indicate suspicious or bad scripts. 

For example, the presence of 'eval' or the presence of 'base64_decode'. Be as comprehensive as you can, and don't be afraid of including a feature that might capture some 'good' scripts too. One way to help with this is to calculate the frequency counts of words in the two classes of scripts and select as features words that appear prominently in 'bad' but less prominently in 'good'.
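As a rough sketch of that frequency comparison, using the loader above (the tokenization here is deliberately naive):

import re
from collections import Counter

def class_counts(texts):
    # Count identifier-like tokens across all scripts in one class.
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[A-Za-z_]\w+", text))
    return counts

# Tokens common in 'bad' scripts but rare in 'good' ones are feature
# candidates: compare class_counts(bad_texts) against class_counts(good_texts).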

Run the feature generator over the corpus and build up a binary matrix of features with labels.
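For instance, the feature generator could be as simple as the following; the token list is an assumption and should be extended using the frequency counts above:

import numpy as np

FEATURE_TOKENS = ["eval", "base64_decode", "exec", "gzinflate",
                  "preg_replace", "fwrite", "upload"]

def featurize(texts):
    # One row per script, one 0/1 column per suspicious token.
    return np.array([[int(tok in text) for tok in FEATURE_TOKENS]
                     for text in texts])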

Split the corpus into training (80% of examples) and test (20%) sets. Using the scikit-learn library, train a few different classification algorithms (random forests, support vector machines, naive Bayes, etc.) on the training set and test their performance on the unseen test set.
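A baseline run with scikit-learn might look like this, assuming the data comes from the load_corpus and featurize sketches above:

from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

texts, labels = load_corpus()   # from the sketches above
X, y = featurize(texts), labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

for clf in (RandomForestClassifier(), SVC(), BernoulliNB()):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, accuracy_score(y_test, clf.predict(X_test)))

BernoulliNB is used here because the features are binary; any of the listed classifiers is a reasonable starting point.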

You should now have a reasonable classification accuracy to benchmark against. From there, you can look at improving the features, applying some unsupervised methods, and trying more specialized algorithms to get better performance.

Hope this answer helps you! For more details, study Machine Learning Algorithms; working through the Python Tutorial would also be of great benefit.
