in DevOps and Agile by (19.7k points)

I am running several tests with WebDriver and Firefox.

I'm running into a problem with the following command:

webDriver.get("http://www.google.com");

With this command, WebDriver blocks until the onload event is fired. While this normally takes seconds, it can take hours on websites that never finish loading.

What I'd like to do is stop loading the page after a certain timeout, somehow simulating Firefox's stop button.

I first tried to execute the following JS code every time that I tried loading a page:

var loadTimeout = setTimeout("window.stop();", 10000);

Unfortunately, this doesn't work, probably because:

Because of the order in which scripts are loaded, the stop() method cannot stop the document in which it is contained from loading.
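
For context, this is roughly how that script gets injected from Java via JavascriptExecutor (a minimal sketch, assuming an already initialized driver named wd; the URL and the 10-second delay are placeholders). It fails for exactly the reason quoted above, because the timer is armed in the document that is about to be replaced:

// Assuming 'wd' is an already initialized WebDriver (e.g. a FirefoxDriver).
// The timer belongs to the current document, so it cannot stop the next one.
((JavascriptExecutor) wd).executeScript(
        "var loadTimeout = setTimeout('window.stop();', 10000);");
wd.get("http://www.example.com");   // still blocks until onload fires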

UPDATE 1: I tried to use SquidProxy in order to add connect and request timeouts, but the problem persisted.

One weird thing I found today is that one website that never stopped loading on my machine (Firefox 3.6-4.0 on Mac OS X 10.6.7) loaded normally in other browsers and/or on other computers.

UPDATE 2: The problem can apparently be solved by telling Firefox not to load images. Hopefully, everything will work after that...
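
For anyone who wants to try the same thing, image loading can be switched off through a profile preference (a minimal sketch, assuming the Selenium 2.x FirefoxDriver; permissions.default.image = 2 is the Firefox setting that blocks all images):

FirefoxProfile profile = new FirefoxProfile();
// 2 = block all images (1 = allow all, 3 = allow only from the originating site)
profile.setPreference("permissions.default.image", 2);
WebDriver wd = new FirefoxDriver(profile);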

I wish WebDriver had a better Chrome driver so that I could use it instead. Firefox is disappointing me every day!

UPDATE 3: Selenium 2.9 added a new feature to handle cases where the driver appears to hang. This can be used with FirefoxProfile as follows:

FirefoxProfile firefoxProfile = new ProfilesIni().getProfile("web");

firefoxProfile.setPreference("webdriver.load.strategy", "fast");

I'll post whether this works after I try it.
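
For completeness, the profile still has to be handed to the driver when it is created (a sketch, assuming the Selenium 2.x constructor that takes a FirefoxProfile, and that a Firefox profile named "web" actually exists on the machine):

FirefoxProfile firefoxProfile = new ProfilesIni().getProfile("web");
// "fast" is meant to let get() return without waiting for the full page load.
firefoxProfile.setPreference("webdriver.load.strategy", "fast");
WebDriver wd = new FirefoxDriver(firefoxProfile);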

UPDATE 4: In the end, none of the above methods worked. I ended up "killing" the threads that were taking too long to finish. I am planning to try GhostDriver, which is a remote WebDriver that uses PhantomJS as its back-end. PhantomJS is a headless, scriptable WebKit, so I expect not to have the problems of a real browser such as Firefox. For people who are not obliged to use Firefox (e.g. for crawling purposes), I will update with the results.
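
For reference, the "killing" workaround boils down to running the blocking get() on a worker thread and abandoning it after a deadline (a rough sketch only; the 30-second budget and the variable names are my own choices, and the stuck driver simply gets thrown away):

import java.util.concurrent.*;

ExecutorService executor = Executors.newSingleThreadExecutor();
Future<?> pageLoad = executor.submit(new Runnable() {
    public void run() {
        wd.get(url);                        // may block "forever"
    }
});
try {
    pageLoad.get(30, TimeUnit.SECONDS);     // wait at most 30 seconds
} catch (TimeoutException te) {             // java.util.concurrent, not Selenium's
    pageLoad.cancel(true);                  // abandon the hung load
    wd.quit();                              // discard the stuck browser
} catch (InterruptedException | ExecutionException e) {
    // interrupted, or get() itself failed; handle as appropriate
}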

UPDATE 5: Time for an update. After using GhostDriver 1.1 instead of FirefoxDriver for 5 months, I can say that I am really happy with its performance and stability. I have hit some cases where the behaviour was not quite right, but in general GhostDriver looks stable enough. So if, like me, you need a browser for crawling/web-scraping purposes, I recommend GhostDriver instead of Firefox and Xvfb.
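
If anyone wants to reproduce the setup, GhostDriver is driven like any other remote WebDriver (a sketch, assuming PhantomJS has already been started with "phantomjs --webdriver=8910" on the same machine; the port and the example URL are my own choices):

// PhantomJS started separately with: phantomjs --webdriver=8910
// Note: new URL(...) throws MalformedURLException, so declare or catch it.
DesiredCapabilities caps = new DesiredCapabilities();
caps.setBrowserName("phantomjs");
WebDriver wd = new RemoteWebDriver(new URL("http://localhost:8910"), caps);
wd.get("http://www.example.com");
System.out.println(wd.getTitle());
wd.quit();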

1 Answer

by (62.9k points)

I was able to get around this by doing a few things.

First, set a timeout for the webdriver. E.g.,

import java.util.concurrent.TimeUnit;

WebDriver wd;
// ... initialize wd ...
wd.manage().timeouts().pageLoadTimeout(5000, TimeUnit.MILLISECONDS);

 

Second, when doing your get, wrap it in a try/catch for TimeoutException. (I added an UnhandledAlertException catch there only for good measure.) E.g.,

for (int i = 0; i < 10; i++) {
    try {
        wd.get(url);
        break;                              // page loaded in time; stop retrying
    } catch (org.openqa.selenium.TimeoutException te) {
        // The load timed out: force the page to stop loading, then retry.
        ((JavascriptExecutor) wd).executeScript("window.stop();");
    } catch (UnhandledAlertException uae) {
        // An alert blocked the load: accept it, then retry.
        Alert alert = wd.switchTo().alert();
        alert.accept();
    }
}

 

This basically tries to load the page, but if it times out, it forces the page to stop loading via JavaScript and then tries the get again. It might not help in your case, but it definitely helped in mine, particularly when calling the driver's getCurrentUrl() command, which can also take too long, hit an alert, and need the page to stop loading before you can get the URL.
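
The same pattern can be wrapped around getCurrentUrl() as well (a sketch along the lines of the loop above; a single retry is my own simplification):

String currentUrl;
try {
    currentUrl = wd.getCurrentUrl();
} catch (org.openqa.selenium.TimeoutException te) {
    // Stop whatever is still loading, then read the URL again.
    ((JavascriptExecutor) wd).executeScript("window.stop();");
    currentUrl = wd.getCurrentUrl();
} catch (UnhandledAlertException uae) {
    wd.switchTo().alert().accept();
    currentUrl = wd.getCurrentUrl();
}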
