TF-IDF is currently one of the most widely used methods for search ranking. What you need is some preprocessing from Natural Language Processing (NLP). There are lots of resources that can help you for English (for example the 'nltk' library in Python).
You must apply the same NLP analysis to both your queries (questions) and your documents before indexing.
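Here's a minimal sketch of such a pipeline with nltk, assuming English text; the tokenizer, stopword list, and Porter stemmer are one reasonable combination, not the only one:

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)      # tokenizer models (newer nltk versions may also need "punkt_tab")
nltk.download("stopwords", quiet=True)

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(text):
    """Lowercase, tokenize, drop stopwords and punctuation, then stem."""
    tokens = word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words]

# Run the exact same function on documents at indexing time and on queries at search time:
print(preprocess("How do search engines rank documents?"))
# -> ['search', 'engin', 'rank', 'document']
```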
The point is: while TF-IDF (or tf×idf² as in Lucene) is good, you should apply it to resources annotated with meta-linguistic information. That can be arduous and requires in-depth knowledge of your core application, grammatical analysis (syntax), and the domain of your documents.
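Even without annotations, the basic ranking loop is easy to try. Below is a sketch with scikit-learn; note it uses sklearn's smoothed TF-IDF formula rather than Lucene's tf×idf² scoring, and the toy corpus and query are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toy corpus; in practice, feed in your preprocessed documents.
docs = [
    "the cat sat on the mat",
    "dogs and cats living together",
    "the stock market fell sharply today",
]
query = "cat on a mat"

vectorizer = TfidfVectorizer()                  # plug your NLP preprocessing in via the analyzer/preprocessor args
doc_vectors = vectorizer.fit_transform(docs)    # index the documents
query_vector = vectorizer.transform([query])    # vectorize the query with the SAME vocabulary

# Rank documents by cosine similarity to the query.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```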
Cosine similarity on latent semantic analysis (LSA/LSI) vectors works a lot better than raw TF-IDF for text clustering, though I admit I haven't tried it on Twitter data.
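A minimal LSA sketch in scikit-learn (truncated SVD over TF-IDF vectors is exactly LSA; the corpus and the choice of 2 components are placeholders, real corpora typically use 100+ components):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

docs = [
    "the cat sat on the mat",
    "cats and dogs make great pets",
    "the stock market fell sharply today",
    "investors fear rising interest rates",
]

# TF-IDF -> truncated SVD -> length normalization = LSA vectors.
lsa = make_pipeline(TfidfVectorizer(),
                    TruncatedSVD(n_components=2, random_state=0),
                    Normalizer())
vectors = lsa.fit_transform(docs)

# Pairwise cosine similarities in the latent space; use these for clustering.
print(cosine_similarity(vectors))
```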
Topic models like LDA might work even better.
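For completeness, a sketch of LDA with scikit-learn (LDA models raw word counts, so it uses CountVectorizer rather than TF-IDF; the corpus and n_components=2 are placeholders, and get_feature_names_out needs scikit-learn >= 1.0):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat chased the mouse around the house",
    "dogs and cats make wonderful family pets",
    "the stock market rallied after the rate cut",
    "investors worry about inflation and interest rates",
]

vectorizer = CountVectorizer(stop_words="english")   # LDA expects raw counts, not TF-IDF
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)               # per-document topic mixtures

# Print the top words of each inferred topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
```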
Since it's covered in most machine learning courses, understanding TF-IDF will open a lot of doors for a machine learning newcomer.