PERFORMANCE OPTIMIZATION OF WEB CRAWLER KAVITA GOEL Author


Optimizing crawler performance is a central requirement in the field of web crawling and searching. Because the web contains a huge volume of information, extracting relevant information is a challenging task, and users need an interface through which to do so. A search engine is that interface, and a crawler is the tool a search engine uses to build its database. The results a search engine returns are derived from the results the crawler provides; if the crawler produces better results, the search engine's results become more relevant as well. Finding useful information on the web involves inherent issues of page freshness, crawling multimedia content, and duplicate content. Crawling and indexing similar content and URLs wastes resources, and a crawler returns duplicate results because of a bad crawling algorithm or a poor-quality ranking algorithm.

This thesis contributes to the area of optimizing crawler performance by removing duplicate URLs. Removing duplicate URLs at the crawling stage improves the crawler's efficiency in terms of both time and space. Six popular search engines are first analyzed to identify redundancy in their results across 44 categories of user search interest. A new algorithm based on URL normalization of query parameters and on categorization is then developed. To test the effectiveness of the proposed algorithm, a proposed crawler has been implemented, along with a base crawler built on breadth-first search for comparison. Comparing the results of the proposed crawler with those of the base crawler shows encouraging improvements in crawling time, space, search-engine execution time, and the number of duplicates; the improvement in crawling time between the base crawler and the proposed crawler ranges from 0.086% to 17.44%.
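The thesis does not give the exact normalization rules, but the idea of URL normalization of query parameters can be illustrated with a minimal sketch: lowercase the scheme and host, strip default ports and fragments, and sort query parameters so that equivalent URL variants map to one canonical form. The function name and the specific rules below are illustrative assumptions, not the author's algorithm.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize_url(url):
    """Map equivalent URL variants to one canonical form (illustrative rules)."""
    parts = urlsplit(url.strip())
    scheme = parts.scheme.lower()
    netloc = parts.netloc.lower()
    # Strip default ports, which do not change the resource addressed
    if (scheme == "http" and netloc.endswith(":80")) or \
       (scheme == "https" and netloc.endswith(":443")):
        netloc = netloc.rsplit(":", 1)[0]
    # Sort query parameters so parameter order no longer matters
    query = urlencode(sorted(parse_qsl(parts.query, keep_blank_values=True)))
    path = parts.path or "/"
    # Drop the fragment: it is never sent to the server
    return urlunsplit((scheme, netloc, path, query, ""))
```

With rules like these, `HTTP://Example.com:80/index?b=2&a=1#top` and `http://example.com/index?a=1&b=2` normalize to the same string, so the second one can be recognized as a duplicate before it is fetched.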
The proposed algorithm crawls each URL within a particular category, which yields more relevant results. After URL normalization of query parameters, duplicate URLs are removed, which reduces crawling time and the number of fetched records. The proposed crawler thus leads to more relevant results, a better user experience, and higher user satisfaction.
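The comparison baseline described above is a breadth-first crawler, and the proposed improvement filters duplicates before they enter the frontier. A minimal sketch of that combination, assuming a simple canonicalization rule (lowercase host, sorted query parameters) and a caller-supplied link extractor, might look like this; the names `canonical` and `bfs_crawl` are hypothetical:

```python
from collections import deque
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical(url):
    # Minimal normalization: lowercase scheme/host, sort query parameters
    p = urlsplit(url)
    q = urlencode(sorted(parse_qsl(p.query, keep_blank_values=True)))
    return urlunsplit((p.scheme.lower(), p.netloc.lower(), p.path or "/", q, ""))

def bfs_crawl(seed, get_links, limit=100):
    """Breadth-first crawl that skips URLs already seen in canonical form."""
    seen = {canonical(seed)}
    queue = deque([seed])
    order = []                       # URLs actually fetched, in crawl order
    while queue and len(order) < limit:
        url = queue.popleft()
        order.append(url)
        for link in get_links(url):
            key = canonical(link)
            if key not in seen:      # duplicate variants are filtered here
                seen.add(key)
                queue.append(link)
    return order
```

On a toy link graph where `http://a/?y=2&x=1` is a parameter-reordered duplicate of the seed `http://a/?x=1&y=2`, the crawler fetches only the seed and `http://b/`, never the duplicate; this dedup-before-enqueue step is what saves fetch time and storage.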