Q & A with Alex Kogon and Michael Palotas – Selenium meetup at Fyber

Scalability and QA test automation have long been hot topics for fast-growing tech companies. How do you handle your product or platform tests quickly and efficiently once they number in the thousands? How do you ensure that no unexpected crashes or bugs appear once the product is live? Quality assurance should be an integral part of any ad tech product development process. The question, however, is: which tool should you use for quality assurance tests, and how should you arrange the testing funnel in an agile software development life cycle?

Fyber was lucky enough not only to host the first Berlin Selenium QA meetup, but also to interview not one but two fantastically knowledgeable Selenium users, test engineering and development experts, and software industry veterans: Alex Kogon and Michael Palotas.

For those that have never heard of Selenium, can you explain what it is and how it fits into the real world of automation?

Alex: Selenium is an open-source, widely used standard that allows you to interact with web pages exactly as a user would. It lets you replace manual QA testing with automated scripts that load the browser, interact with it, and do everything a user would. If you have a QA engineer testing one browser manually, he would have to repeat the process across a number of browsers, multiple times, while automated testing lets you run continuously against as many different environments as you want. It has been around for about five years and has become the standard in the ecosystem.

Michael: In the overall automation world, it sits at the very top of the testing pyramid; above Selenium there is only manual testing. In my opinion, Selenium's place is functional regression testing. I would leave usability and performance tests to other tools or to manual testing.
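Alex's description of a script that "does everything that a user would" can be sketched with the WebDriver API. A minimal sketch in Python: the page URL and element IDs are illustrative assumptions, and the locator strings follow Selenium 4's `By` constants ("id" is the string value of `By.ID`).

```python
# A login flow written against the WebDriver API. The URL and the element
# IDs ("username", "password", "submit") are illustrative assumptions.

def login_as_user(driver, base_url, username, password):
    """Load the page, type credentials, and click -- just as a user would."""
    driver.get(base_url + "/login")
    driver.find_element("id", "username").send_keys(username)
    driver.find_element("id", "password").send_keys(password)
    driver.find_element("id", "submit").click()
    return driver.title  # e.g. assert on the post-login page title

if __name__ == "__main__":
    # Running this for real needs a local browser and driver, e.g.:
    # from selenium import webdriver
    # login_as_user(webdriver.Firefox(), "https://example.test", "qa", "secret")
    pass
```

Because the function takes the driver as a parameter, the same flow can run against any browser the grid provides.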

What in your opinion is key to creating a successful test automation strategy?

Alex: I think the most important thing for continuous integration is that it has to be as fast as possible, so tune your tests to run as fast as possible and use the environment that performs best. The problem is that many people use Sauce Labs, which can run slowly because of the number of users at any given time. Others choose AWS, which is also slow. If you build your own dedicated Selenium grid that provides the best possible performance, it might cost more money, but you'll get much better integration. When developers change something, they are supposed to wait until all the changes are integrated to see the result, but if it takes three hours to run the tests, they'll be unable to check every little change to the code.

Michael: The key is to look at the topic of test automation holistically. I often find that GUI-level automation sits with the QA teams and is usually decoupled from the developers' job. Where test automation fails is not in choosing the tool, but in looking at the testing pyramid in the wrong way. There must be very close communication and collaboration between teams to set up an automation strategy that follows the pyramid. In an agile setup it is easier, especially if you have in-house testers rather than teams that operate on different continents, for example. The more integrated the testing and development processes are, the better; it's about looking at it as a whole.

What is the likely future for Selenium, specifically with the shift towards mobile?

Alex: Selenium is just a protocol, but when you learn about Selenium, it's usually in the context of driving a web browser. There is Appium, which has the same type of interface, but you use different selectors: instead of selectors that give you objects on a web page, you use selectors that give you elements in the Android or iOS interface. I haven't worked with it myself, but it looks quite interesting. Selenium was written for the web, but all that matters is treating user interface elements as objects, so as long as you can expose user interface elements as objects and are able to interact with them, you can use it.

Michael: The mobile topic, of course, is being addressed by Selenium through tools such as Appium and Selendroid. There are already ways for Selenium to address mobile; it's not yet perfect, but Selenium will have to move away from working exclusively with web browser testing and will eventually have to have mobile devices seamlessly integrated into its service. I am hoping that this is the route Selenium will take.
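The selector difference Alex describes can be shown side by side. A hedged sketch: the strategy names are the string values behind Selenium's `By` and Appium's `AppiumBy` constants, but the concrete locator values are made up for illustration.

```python
# Same WebDriver-style call, different locator vocabularies per platform.
# Locator values ("button#checkout" etc.) are illustrative assumptions.
WEB_LOCATOR = ("css selector", "button#checkout")              # By.CSS_SELECTOR
ANDROID_LOCATOR = ("accessibility id", "checkout_button")      # AppiumBy.ACCESSIBILITY_ID
IOS_LOCATOR = ("-ios predicate string", "name == 'Checkout'")  # AppiumBy.IOS_PREDICATE

def tap_checkout(driver, locator):
    """One flow, any platform: find the element by its locator and click it."""
    by, value = locator
    driver.find_element(by, value).click()
    return (by, value)
```

The calling code stays the same whether `driver` is a browser session or an Appium session; only the locator tuple changes.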

End to end tests are notorious for being brittle and slow. What is the best approach in carrying out the necessary tests, but not slowing down the testing process?

Michael: One thing is to have as few tests on the top level as possible, which still means you will have a few hundred tests to run. Another is the scaling aspect: make sure you have a scalable infrastructure. I, personally, have set up an in-house grid with enough horsepower to allow scaling. As for the performance part, a big enough grid means that your total test execution time equals the time it takes to run your longest test. That is theoretically speaking, of course, but it would be a matter of a few minutes.

Companies often start without a grid or scaling in mind, and then, when they think they've reached a certain size, they just throw in a Selenium grid and the whole thing explodes; nothing works, tests become brittle, they pass, they fail, nobody knows why. When you take a deeper look, you'll see the tests weren't written in an atomic way. Test data management is very important: when you have to run a thousand tests at the same time, each test needs its own dataset. This is not something to think about only when scaling becomes necessary, but something to consider from day one.

A good rule of thumb is to use Selenium when you need the very top layer of the testing pyramid evaluated: the visual part, the consistency-level testing, as part of your user acceptance test for a web product. Selenium is paramount for these types of tests. For algorithm tests, however, there is no need for Selenium. Always remember: the fewer tests you have on the higher level, the better.
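Michael's point that each of a thousand parallel tests needs its own dataset can be sketched as a per-test fixture generator. The field names (`username`, `email`) are illustrative assumptions; the idea is only that every test invocation gets data no other test touches.

```python
import uuid

def make_test_fixture(base_user="testuser"):
    """Build an isolated dataset for one test run, so tests running in
    parallel on a grid never share state. Field names are illustrative."""
    run_id = uuid.uuid4().hex[:8]  # unique suffix per test invocation
    return {
        "username": f"{base_user}_{run_id}",
        "email": f"{base_user}_{run_id}@example.test",
    }
```

A test would create its fixture in setup and (ideally) delete it in teardown, which is what keeps tests atomic when a grid runs them concurrently.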

If I am new to Selenium, where do you suggest I start?

Alex: I don’t think writing Selenium tests is very difficult at all; of course, it depends on how much technical knowledge you possess. If you are a tester who knows how to drive a web browser but has never seen the HTML code behind it, it may be a little difficult, but what matters most is having the aptitude and the will to learn. I think the problem is that many developers don’t want to be test developers, but to be a good tester, you have to be a good developer. There are quite a few people in testing who are not developers and they perform just fine, but once you get into creating an architecture that lets you maintain your system more affordably, that’s where you have to know how to code. Selenium testing is quite simple, but maintaining it properly requires a good level of software skills.

Michael: The bad news is that, unfortunately, there is no good documentation for Selenium right now (laughs). A good starting point is still the SeleniumHQ website, just to get a feel for it. Pick a language you are comfortable with, assuming you are comfortable with at least one coding language; Selenium is great because it supports most of them. Install Selenium and play around with it, and if you don’t understand something, Google it or attend the meetups in Berlin!

