Generate All XPATH from web page

Why is it important?


Sometimes while performing regression testing we need to check all the UI components. This is especially important for look-and-feel testing, because certain elements may be displayed in one browser but missing in another. Extracting all the XPaths manually is very tedious.
Here I will show you how to get them all in one go by automating the task.


How do we automate it and get results?

Before proceeding further, there are some prerequisites.

1. Check that Java is installed:

java -version


2. Add the jsoup library.

Maven:


<dependency>
  <groupId>org.jsoup</groupId>
  <artifactId>jsoup</artifactId>
  <version>1.8.3</version>
</dependency>

Gradle:

compile group: 'org.jsoup', name: 'jsoup', version: '1.8.3'
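Note that newer Gradle versions have removed the `compile` configuration; there, declare the dependency with `implementation` instead:

```groovy
implementation 'org.jsoup:jsoup:1.8.3'
```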

Alternatively, download the jsoup jar directly from the jsoup website.

Now we have Java installed and know how to add the jsoup library, depending on our project setup.

Let's code. You can use any IDE for this. The output is printed to the console, and you can modify the code below to suit your own requirements.


package com.dexterlabs.xpaths;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class GetAllXpaths {

    // Current path from the root down to the element being visited
    private static final List<String> path = new ArrayList<>();
    // Collected "xpath = text" entries for every element that has its own text
    private static final List<String> all = new ArrayList<>();

    public static void main(String[] args) throws IOException {
        Document doc = Jsoup.connect("https://w3schools.com/").get();
        parse(doc);
        for (String xpath : getAll()) {
            System.out.println(xpath);
        }
    }

    public static List<String> getAll() {
        return Collections.unmodifiableList(all);
    }

    public static void parse(Document doc) {
        path.clear();
        all.clear();
        parse(doc.children());
    }

    private static void parse(List<Element> elements) {
        if (elements.isEmpty()) {
            return;
        }
        // Group sibling elements by tag name; LinkedHashMap preserves document order
        Map<String, List<Element>> grouped = elements.stream()
                .collect(Collectors.groupingBy(Element::tagName, LinkedHashMap::new, Collectors.toList()));

        for (Map.Entry<String, List<Element>> entry : grouped.entrySet()) {
            List<Element> list = entry.getValue();
            String key = entry.getKey();
            if (list.size() > 1) {
                // Several siblings share this tag, so append a 1-based index: tag[1], tag[2], ...
                int index = 1;
                for (Element e : list) {
                    path.add(key + "[" + (index++) + "]");
                    handleElement(e);
                    path.remove(path.size() - 1);
                }
            } else {
                // Only one sibling with this tag, so no index is needed
                path.add(key);
                handleElement(list.get(0));
                path.remove(path.size() - 1);
            }
        }
    }

    private static void handleElement(Element e) {
        String value = e.ownText();
        if (!value.isEmpty()) {
            // Record the element's path together with its own text
            all.add(String.join("/", path) + " = " + value);
        }
        // Recurse into the element's children
        parse(e.children());
    }
}
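To see the core idea in isolation: the code treats `path` as a stack of segments that grows while descending the DOM tree and is joined with "/" to form the final expression. Here is a minimal stdlib-only sketch of that joining step (the class name and segment values are made up for illustration, not part of the project above):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.stream.Collectors;

public class XpathJoinDemo {
    public static void main(String[] args) {
        // A stack of path segments, mirroring the List used in GetAllXpaths
        Deque<String> path = new ArrayDeque<>();
        path.addLast("html");
        path.addLast("body");
        // Siblings that share a tag get a 1-based index, e.g. div[1], div[2]
        path.addLast("div[2]");

        // Join the segments into an XPath-style expression
        String xpath = path.stream().collect(Collectors.joining("/"));
        System.out.println(xpath); // prints "html/body/div[2]"
    }
}
```

Each console line from the program above follows the same shape, with the element's own text appended after " = ".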



You can get the working project from Git.
Reviewed by Karan Sawhney at 1:23 AM
