
I'm trying to get some RSpec tests running using a mix of Capybara, Selenium, capybara-webkit, and Poltergeist. I need them to run headless in certain cases and would rather not use xvfb to get webkit working; I'm okay using either Selenium or Poltergeist as the driver for PhantomJS. The problem I'm having is that my tests run fine with Selenium and Firefox or Chrome, but when I try PhantomJS the elements always show up as not found. After looking into it for a while and using page.save_screenshot in Capybara, I found that the PhantomJS browser wasn't fully loaded when the driver told it to find elements, so it wasn't returning anything. I was able to hack in a fix by editing the Poltergeist source in <gem_path>/capybara/poltergeist/driver.rb as follows:

# Hack: on the very first visit, sleep for a couple of seconds so PhantomJS
# has time to finish starting up before anything tries to find elements.
def visit(url)
  sleep_time = @started ? 0 : 2
  @started = true
  browser.visit(url)
  sleep sleep_time
end

This is obviously not an ideal solution to the problem, and it doesn't work when Selenium is the driver for PhantomJS. Is there any way I can tell the driver to wait until PhantomJS is ready?

UPDATE:

I was able to get it to run by changing where I included the Capybara::DSL. I added it to the RSpec.configure block as shown below.

RSpec.configure do |config|
  config.include Capybara::DSL
end

I then passed the page object to all the classes I created for interacting with the web page UI.

An example class now looks like this:

module LoginUI
  require_relative 'webpage'

  class LoginPage < WebPages::Pages
    def initialize(page, values = {})
      super(page)
    end

    def visit
      browser.visit(login_url)
    end

    def login(username, password)
      set_username(username)
      set_password(password)
      sign_in_button
    end

    def set_username(username)
      edit = browser.find_element(@selectors[:login_edit])
      edit.send_keys(username)
    end

    def set_password(password)
      edit = browser.find_element(@selectors[:password_edit])
      edit.send_keys(password)
    end

    def sign_in_button
      browser.find_element(@selectors[:sign_in_button]).click
    end
  end
end

The WebPages module looks like this:

module WebPages
  require_relative 'browser'
  class Pages
    def initialize(page)
      @page = page
      @browser = Browser::Browser.new
    end

    attr_reader :browser

    def sign_out
      browser.visit(sign_out_url)
    end
  end
end

The Browser module looks like this:

require 'capybara/dsl'

module Browser
  class Browser
    include Capybara::DSL
    def refresh_page
      page.evaluate_script("window.location.reload()")
    end

    def submit(locator)
      find_element(locator).click
    end

    def find_element(hash)
      page.find(hash.keys.first, hash.values.first)
    end

    def find_elements(hash)
      # find (which waits/retries) ensures at least one match exists
      # before all collects every matching element
      page.find(hash.keys.first, hash.values.first, match: :first)
      page.all(hash.keys.first, hash.values.first)
    end

    def current_url
      return page.current_url
    end
  end
end
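
For context, a spec that exercises these classes might look roughly like the sketch below (the URL, selectors, and credentials are placeholders, and it assumes LoginUI::LoginPage is loadable from the spec and that login_url/@selectors are set up in the parts removed from the example):

RSpec.describe 'Login' do
  it 'signs the user in' do
    # `page` is available here because Capybara::DSL is included via RSpec.configure
    login_page = LoginUI::LoginPage.new(page)
    login_page.visit
    login_page.login('user@example.com', 'secret')
    login_page.sign_out
  end
end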

While this works, I don't want to have to include Capybara::DSL inside the RSpec configuration or pass the page object into these classes. The classes have had some things removed for the example but show the general structure. Ideally, I would like the Browser module to include Capybara::DSL and handle all of the interaction with Capybara.


2 Answers

1 vote

Your update completely changes the question, so I'm adding a second answer. There is no need to include Capybara::DSL in your RSpec configuration if you don't call any Capybara methods from outside your Browser class, just as there is no need to pass page to all your Pages classes if you limit all Capybara interaction to your Browser class. One thing to note is that the page method provided by Capybara::DSL is just an alias for Capybara.current_session, so technically you could always call that instead.
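
For instance, the two calls below are interchangeable (the selector is just a hypothetical example):

page.find('#login_form')                      # via the Capybara::DSL helper
Capybara.current_session.find('#login_form')  # explicit, no DSL include needed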

You don't show how you're handling assertions/expectations on the page content, so depending on how you're doing that you may need to include Capybara::RSpecMatchers in your RSpec config and/or your WebPages::Pages class.
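
For example, if the expectations live in your specs, wiring the matchers in might look like this (a sketch, assuming a standard Capybara/RSpec setup):

require 'capybara/rspec/matchers'

RSpec.configure do |config|
  # just the matchers (have_content, have_selector, etc.), not the full DSL
  config.include Capybara::RSpecMatchers
end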

Your example code has a couple of issues that immediately pop out. Firstly, your Browser#find_elements (assuming I'm reading your intention for the initial find correctly) should probably just be:

def find_elements(hash)
  page.all(hash.keys.first, hash.values.first, minimum: 1)
end

Secondly, your LoginPage#login method should end with an assertion/expectation on a visual change that indicates login succeeded (verify some message is displayed, a logged-in menu exists, etc.) to ensure the browser has received the auth cookies before the tests move on. What that line looks like depends on exactly how you're architecting your expectations.
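
Using your own helpers, that could look something like the sketch below, where :logged_in_menu is a hypothetical selector for an element that only appears after a successful login:

def login(username, password)
  set_username(username)
  set_password(password)
  sign_in_button
  # find_element waits (and raises) until the post-login element appears,
  # so the session has its auth cookies before the test moves on
  browser.find_element(@selectors[:logged_in_menu])
end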

If this doesn't answer your question, please provide a concrete example of exactly what isn't working for you, since none of the code you're showing indicates any need for Capybara::DSL to be included in either of the places you say you don't want it.

0 votes

Capybara doesn't depend on visit having completed; instead, the finders and matchers retry for up to a specified period of time until they succeed. You can increase this amount of time by increasing the value of Capybara.default_max_wait_time. The only methods that don't wait by default are first and all, but they can be made to wait/retry by specifying any of the count options:

first('.some_class', minimum: 1)  # will wait up to Capybara.default_max_wait_time seconds for the element to exist on the page.

That said, you should always prefer find over first/all whenever possible.
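
For reference, bumping the global wait time is a one-liner in your test setup (the value here is arbitrary):

# give slower drivers like PhantomJS more time before finders/matchers give up
Capybara.default_max_wait_time = 10  # the default is 2 seconds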

If increasing the maximum wait time doesn't solve your issue, add an example of a failing test to your question.