1.1 BACKGROUND OF THE PROBLEM
Search engines are recognized as a platform for promoting and advertising a website or company via the internet (Berman & Katona, 2013). A search engine collects information based on a website's index and keywords (Genaro, 2015). These keywords serve as a summary of a website's content and are stored in the search engine software's database. A single keyword search can return hundreds of results, yet users tend to consult only the results on pages 1 and 2. Web developers should therefore take SEO strategies seriously in order to capture these benefits and compete with other sites. Hotels that wish to stand out from the competition should implement this critical local SEO strategy to earn higher rankings, more visitors, and more reservations (Taylor, 2019). Although the importance and potential of SEO techniques for improving website rankings on the SERPs have been established in the literature, limited research has investigated in depth how SEO techniques are applied across hotel websites in South Sumatra and how those sites perform on the SERP. Understanding how SEO works in favor of a hotel website is essential for its improvement. Consequently, the purpose of this research is to examine several hotel websites in the province of South Sumatra and their performance.
1.2 RESEARCH QUESTIONS
Do the (a) number of internal links, (b) number of external links, (c) number of Google indexed pages, and (d) total daily page views have a positive relationship with hotel website findability?
1.3 RESEARCH OBJECTIVES
To test the relationship between hotel website findability and the (a) number of internal links, (b) number of external links, (c) number of Google indexed pages, and (d) total daily page views.
2.0 LITERATURE REVIEW
2.1 SEARCH ENGINE
2.1.1 DEFINITION OF SEARCH ENGINE
A search engine is a computer program that collects information from a website's index and keywords (Genaro, 2015). Search engines are meant to help users find content stored on a website by means of keywords that summarize the website; those keywords are then stored in the search engine software's database. As a result, a search for a single keyword may return hundreds of results, and on the Search Engine Result Page (SERP) the results are ranked according to website quality. According to Ledford (2007), a search engine may in theory be separated into two parts: back-end and front-end.
- Back-end: a software program that gathers content from multiple sites by means of automated programs. The data gathered generally consists of keywords or phrases that are probable indications of what the web page as a whole contains, the URL of the page, the code that makes up the page, and the links into and out of the page. This data is then indexed and kept in a database.
- Front-end: the program's user interface, where users may enter a keyword to find particular information or content. When someone hits the search button, an algorithm analyses the data in the back-end database and extracts links to websites that match the search phrase the user provided (a minimal sketch of this split follows below).
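As a rough, hypothetical illustration of this two-part split (the dictionary, the search function, and the example URLs below are invented for illustration, not taken from any real engine), the back-end can be pictured as a keyword-to-URL database and the front-end as a query function over it:

```python
# Hypothetical sketch of Ledford's back-end/front-end split.
# Back-end: an indexed "database" mapping keywords to the pages they summarize.
backend_index = {
    "hotel": ["https://hotel-a.example", "https://hotel-b.example"],
    "palembang": ["https://hotel-a.example"],
}

def search(keyword: str) -> list[str]:
    """Front-end: look the keyword up in the back-end database and
    return the links that match the user's search phrase."""
    return backend_index.get(keyword.lower(), [])

print(search("hotel"))   # ['https://hotel-a.example', 'https://hotel-b.example']
```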
2.1.2 THE OPERATION OF SEARCH ENGINE
2.1.2.1 CRAWLING
A web-based search engine operates by storing information from many websites and later retrieving it. A web crawler, also known as a crawler or a spider, retrieves these pages by following each link on a website (Sambana, 2016). Crawling the web is the first stage of this operation. Search engines begin with a seed collection of known good-quality websites and then explore the links on each page to identify new pages. The Web's link structure connects all the sites that become reachable because someone links to them. Search engines' automated robots, known as crawlers or spiders, can access the many billions of interconnected pages through these links. The search engine then loads those additional pages and evaluates their content as well. This step is repeated until the crawling process is completed. Because the web is a vast and complicated place, this process is immensely difficult, and search engines do not attempt to crawl the entire web each day. Crawlers may also become aware of pages that they choose not to crawl because those pages are unlikely to appear in a search result (Enge, 2015).
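A minimal sketch of the seed-and-follow-links crawl described above, using only the Python standard library, might look as follows; the seed URLs and the page budget are placeholders, and real crawlers add robots.txt handling, politeness delays, and scheduling policies:

```python
# Minimal breadth-first crawler sketch: start from seed pages, follow each
# link found, and stop after a fixed page budget.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href targets of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, max_pages=20):
    """Breadth-first crawl from a seed set, returning {url: html}."""
    seen = set(seed_urls)
    queue = deque(seed_urls)
    pages = {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue  # unreachable or unreadable page: skip it
        pages[url] = html
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages

# pages = crawl(["https://example.com"])  # hypothetical seed URL
```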
2.1.2.2 SEARCH INDEX
The next step in this operation is to create a term index. This is a huge database that records all of the significant keywords on each page scanned by the crawler. A great deal of additional information is stored as well, such as a map of all the pages each page links to, the clickable text of all links (known as anchor text), and whether those links are considered advertisements. To handle the huge task of storing information on so many pages so that it can be accessed in a fraction of a second, search engines have built large data centers (Enge, 2015).
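Continuing the sketch, the term index can be pictured as an inverted index mapping each significant word to the set of crawled pages that contain it; the tag-stripping regex below is a crude stand-in for real HTML parsing:

```python
# Sketch of a term index (inverted index): for each significant word, record
# which crawled pages contain it. Real engines also store link maps, anchor
# text, and ad flags, as described above.
import re
from collections import defaultdict

def build_index(pages: dict[str, str]) -> dict[str, set[str]]:
    index = defaultdict(set)
    for url, html in pages.items():
        text = re.sub(r"<[^>]+>", " ", html)          # crudely strip HTML tags
        for term in re.findall(r"[a-z0-9]+", text.lower()):
            index[term].add(url)                      # term -> pages containing it
    return index

# index = build_index(pages)  # reusing the crawler sketch's output
```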
2.1.2.3 RETRIEVAL AND RANKING
The next stage in this operation occurs when the search engine delivers a list of relevant websites in the order it judges most likely to satisfy the user. This step requires search engines to comb their collection of trillions of documents and accomplish two things: first, return only the data that is relevant to the searcher's query; and second, arrange the results in descending order of perceived importance.
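These two steps can be sketched as a relevance filter followed by a naive term-frequency ordering; production ranking draws on far richer signals (links, quality, freshness), so the scoring function below is purely illustrative:

```python
# Retrieval-and-ranking sketch: keep only pages containing every query term
# (relevance), then order them by total term frequency, a crude stand-in for
# "perceived importance".
def rank(query: str, index: dict[str, set[str]], pages: dict[str, str]) -> list[str]:
    terms = query.lower().split()
    if not terms:
        return []
    # Step 1: relevance filter -- pages that contain all query terms.
    relevant = set.intersection(*(index.get(t, set()) for t in terms))
    # Step 2: order by descending total term frequency.
    def score(url: str) -> int:
        return sum(pages[url].lower().count(t) for t in terms)
    return sorted(relevant, key=score, reverse=True)

# rank("hotel palembang", index, pages)  # SERP-style ordered result list
```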

