How dynamic DNS works


One of the important characteristics of the DNS is that it sometimes needs to be dynamic, especially when a domain name is assigned to a computer whose IP address changes. So that the DNS could be updated in real time, a system was created to handle these changes: dynamic DNS allows domain name data to be modified as the changes occur.

Dynamic DNS works on the same basis as ordinary DNS, but the dynamic machinery added to it allows much more flexibility. Its importance is that it lets other sites establish a connection to a host without having to track that host's IP address continuously. This is especially helpful for networks whose addresses themselves change frequently, such as those of consumer-focused Internet service providers. As an integral part of Active Directory, dynamic DNS also needs to keep its maximum caching time down to a few minutes in order to avoid stale addresses lingering in other sites' DNS caches; this also helps speed up contact between servers.

When server operators implement dynamic DNS, an initial step is to shorten the caching time, that is, the interval for which other resolvers may cache the domain's records; in this case, the caching time can be set to roughly one or two minutes.
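
As a hedged illustration of this shorter caching interval, the sketch below queries a host's A record with the third-party dnspython library and prints the TTL, the number of seconds other resolvers are allowed to cache the answer. The host name is a placeholder, and the "60 to 120 seconds" figure is an assumption in line with the interval described above, not a value from any particular provider.

```python
# Minimal sketch using dnspython (pip install dnspython); the name queried
# is a placeholder, not a host mentioned in the article.
import dns.resolver

answer = dns.resolver.resolve("www.example.com", "A")
print(answer.rrset.ttl)        # seconds the record may be cached downstream;
                               # a dynamic-DNS host would typically use 60-120
for record in answer:
    print(record.address)      # the address(es) currently published
```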

Implementing dynamic DNS is also made possible through DNS hosting services, whose servers keep the current addresses in their database and which provide a client program that sends an update whenever the IP address changes. The specific dynamic DNS features available depend on the service provider and on the firmware in use. Routers, for example, are designed to support particular dynamic DNS services; the UMAX UGate-3000 was the first router able to support dynamic DNS.
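
The sketch below shows, under stated assumptions, the kind of client such a hosting service typically supplies: a small program that reports the host's current address to the provider over an authenticated HTTP request. The service URL, hostname, credentials and address are all placeholders, and the query-parameter scheme is an assumption; the real endpoint and parameters depend entirely on the provider's documented API.

```python
# Hypothetical dynamic-DNS update client; every value below is a placeholder.
import urllib.request

def send_ddns_update(service_url: str, hostname: str,
                     username: str, password: str, current_ip: str) -> str:
    # Many providers accept an authenticated GET of the form
    #   <service_url>?hostname=<name>&myip=<address>
    # but the exact scheme varies, so treat this as an assumption.
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, service_url, username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))
    url = f"{service_url}?hostname={hostname}&myip={current_ip}"
    with opener.open(url, timeout=10) as response:
        return response.read().decode("utf-8")   # provider's status reply

# Example call (placeholders only):
# send_ddns_update("https://ddns.example-provider.test/update",
#                  "home.example.net", "user", "secret", "203.0.113.7")
```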

Another important facet of dynamic DNS is the update protocol described in RFC 2136, commonly driven through the nsupdate utility. Although dynamic DNS is an important feature for particular sites, a DNS that accepts updates can also be dangerous for security and stability reasons. It is therefore important to authenticate dynamic DNS updates, for example with TSIG, which signs each update with a shared HMAC-MD5 key.
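
To make the RFC 2136 mechanism concrete, here is a minimal sketch, assuming the dnspython library, of an update signed with a TSIG HMAC-MD5 key, equivalent in spirit to what the nsupdate utility does. The zone, record, key name, key material and server address are all placeholders.

```python
# Hypothetical RFC 2136 dynamic update signed with TSIG (HMAC-MD5);
# all names, keys and addresses below are placeholders.
import dns.query
import dns.tsigkeyring
import dns.update

# The shared secret as it would appear in the server's TSIG configuration.
keyring = dns.tsigkeyring.from_text({"ddns-key.": "c2VjcmV0c2VjcmV0"})

update = dns.update.Update("example.com", keyring=keyring,
                           keyname="ddns-key.",
                           keyalgorithm="hmac-md5.sig-alg.reg.int")
# Replace the A record for "home.example.com" with a short-TTL address.
update.replace("home", 120, "A", "203.0.113.7")

response = dns.query.tcp(update, "192.0.2.53")   # the authoritative server
print(response.rcode())                          # 0 (NOERROR) means accepted
```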

Alternatives have also been designed, such as Microsoft's GSS-TSIG, which uses Kerberos for authentication and automates the distribution of the keys. GSS-TSIG has become a proposed standard, and systems running Microsoft Windows 2000, Windows XP and Windows Server 2003 use only this authenticated form of dynamic DNS. Dynamic DNS has thus carved out a niche in several markets, especially in the server sector. It has also evolved along with Internet technology more broadly, particularly improvements in connection speeds and in affordable computer hardware; this has opened an opportunity in the home server market, which has taken advantage of offering dynamic DNS services. As a result, it is possible for a small website to be hosted on a dynamic IP address merely by installing the relevant server setup, which can still function despite its amateur level.

BIBLIOGRAPHY:
‘Dynamic DNS’. 2008. Wikipedia. [Online] Available at:
http://en.wikipedia.org/wiki/Dynamic_DNS
‘What is Dynamic DNS?’. N.d. The Tech-FAQ. [Online] Available at:
http://www.tech-faq.com/dynamic-dns.shtml

Internationalized Domain Names ("umlaut" domains)


Domain names need to be recognized by the system; however, not all characters can be represented with the alphanumeric characters typically used. Internationalized domain names have made text usage in the DNS more flexible by including additional characters that the DNS can also read and translate. The intention of these domain names is essentially to allow names containing non-ASCII characters, such as text with diacritics and characters from non-Latin scripts such as Arabic and Chinese.

As previously mentioned, these domain names, within the framework of the DNS, work by representing IP addresses through more recognizable textual names. Since the characters allowed are not universal, the original system tends to be restrictive.

The deployment of IDN began in October 2002, when the Internet Engineering Steering Group approved the publication of RFCs 3490, 3491 and 3492, the protocols for Internationalizing Domain Names in Applications (IDNA). Through IDNA, support for non-ASCII IDNs within the DNS would be established, allowing the creation of domain names that contain non-ASCII characters. By March 2003, the Internet Corporation for Assigned Names and Numbers (ICANN) and a number of IDN-implementing registries had developed a set of universal “Guidelines for the Implementation of Internationalized Domain Names”, whose first version was published in June 2003.

One of the important issues addressed by these guidelines is the risk of cybersquatting and of duplicate registrations of domain names, which can confuse consumers. In addition, the guidelines make a point of respecting the local languages and character sets that may be used in domain names. Hence, one initiative was the deployment of language-specific registration and administration rules that can be accessed by the public. The registries also need to honour their agreements with ICANN, in addition to fulfilling the requirements set out in the guidelines.

Although initiatives on IDN may appear recent, driven by the growing global uptake of the Internet, the idea was actually conceived as early as 1996 by M. Duerst, with an implementation following in 1998 by T.W. Tan and his team.

IDN works primarily by preserving backward compatibility. Only applications that support IDNA can handle such domains in their native form, although other applications can still reach the sites through the ASCII form. When an application is IDNA-enabled, it converts between the ASCII and non-ASCII representations in both directions, with the ASCII form used for the actual DNS lookup, while users see and type the non-ASCII form.

Conversion between the two forms is made possible by two algorithms, ToASCII and ToUnicode. In IDNA, the conversion is applied separately to each label of the domain name; each label that contains non-ASCII characters is processed through Nameprep (normalization) and then encoded with Punycode.
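
As a small sketch of that round trip, Python's built-in "idna" codec implements the IDNA 2003 rules (Nameprep plus Punycode), applied label by label; the domain below is a placeholder.

```python
# ToASCII / ToUnicode round trip with the standard-library "idna" codec.
name = "bücher.example"                 # placeholder non-ASCII domain

ascii_form = name.encode("idna")        # ToASCII, applied per label
print(ascii_form)                       # b'xn--bcher-kva.example'
print(ascii_form.decode("idna"))        # ToUnicode: back to 'bücher.example'
```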

There are evident advantages and disadvantages to using IDNA. On one hand, IDN allows a more universal and flexible way to use non-Latin scripts when creating domain names. On the other hand, the disadvantage lies in the risks that come with IDN applications, including cybersquatting and spoofing, since full Unicode names make it easier to create spoof sites that replicate other sites. The challenge is that both the domain name and the security certificate can be spoofed in this way. What makes this possible is not a fault in Unicode itself but the visual similarity between characters from different scripts: characters that look alike read as the same name to a human, even though minor differences such as small punctuation marks or umlauts are easily overlooked.
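
A brief, hedged illustration of that homograph risk: the two placeholder labels below render almost identically on screen, yet the DNS treats them as entirely different names because they encode to different ASCII-compatible forms.

```python
# The first letter of "lookalike" is Cyrillic U+0430, not the Latin letter "a".
genuine = "apple.example"
lookalike = "\u0430pple.example"

print(genuine.encode("idna"))     # b'apple.example' -- plain ASCII passes through
print(lookalike.encode("idna"))   # an 'xn--...' Punycode label, clearly not 'apple'
print(genuine == lookalike)       # False, although the two look the same to a reader
```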

BIBLIOGRAPHY
‘Internationalized Domain Name’. 2007. ICANN. [Online] Available at:
http://www.icann.org/topics/idn.html
‘Internationalized Domain Name’. 2008. Wikipedia. [Online] Available at:
http://en.wikipedia.org/wiki/Internationalized_domain_name#Internationalizing_domain_names_in_applications

How Web Servers Work and what solutions and technologies are available


It is entirely possible that you have used the internet in the past, or indeed that you are using it now. But few of us consider the whole process, from the moment one types an address into a browser and presses "enter" to the moment the requested page appears on the screen. At the most basic level, the process involves an interaction between the web browser and a remote server. After someone has keyed a web address, or URL (Uniform Resource Locator, which invariably looks something like http://www.blahblah.com), into a browser, the information is unlikely to be on that computer, so the browser requests the page from a remote server, which then “serves” the page (hence the name “server”) back to the browser. But that explanation is an oversimplification of what goes on “behind the scenes” to deliver the requested page in so little time. There is an intricate, interdependent sequence of events that combines to make the internet what it is today: the headquarters of both information and misinformation!

This is an attempt to explain that intricate process, and because we fully understand that past attempts to do so have mostly failed, we will try to keep it simple and speak a bit of plain English. The article proceeds on the assumption that readers know next to nothing about web servers. Perhaps the appropriate point of departure is to explain what a web server actually is. The term denotes two things: hardware and software. The hardware is the computer or machine used to store the information, and the software is the program that runs on that machine and is responsible for processing requests from web browsers. A description of what a web browser is should perhaps appear in another article designed for that purpose; otherwise this article might be accused of lyrical digression.

Going back to where we started: when someone types a URL or clicks a link, several things happen. The web browser divides the URL into three parts: the protocol, the address and the path name. The software installed on that particular server then handles the transfer of data between the browser and the server itself, using the appropriate communication protocol. That protocol is usually HTTP, an acronym for HyperText Transfer Protocol. The browser, after dividing the URL, communicates with a name server, part of the Domain Name System (DNS), to interpret the domain name and turn it into a numerical value known as the IP address, which reveals the site's true address on the web. After that is accomplished, the web browser uses the chosen protocol, usually HTTP or FTP (File Transfer Protocol). If the protocol is HTTP, the browser connects to port 80, the port conventionally used for web page communications; FTP instead uses port 21 for its control connection. Discussing these ports in detail would be onerous, for there are hundreds of them, but it may suffice to say that this numbering system was devised by the Internet Assigned Numbers Authority for the easy location of services on the internet, and that there is nothing sacred about the port numbers: it is simply customary to use them as they are. The server software chosen depends on the operating system installed on the server; examples of widely used server software include Microsoft's Internet Information Server on Windows and Apache on UNIX-like systems.
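
As a small sketch of those first steps, the snippet below splits a placeholder URL into protocol, host and path, then asks the DNS for the host's numerical address, much as the browser does before it ever contacts the web server.

```python
import socket
from urllib.parse import urlsplit

url = "http://www.example.com/about.asp"      # placeholder URL
parts = urlsplit(url)

print(parts.scheme)       # 'http'  -> the protocol, which implies port 80
print(parts.hostname)     # 'www.example.com' -> handed to the DNS
print(parts.path)         # '/about.asp' -> the page to request
print(socket.gethostbyname(parts.hostname))   # the site's IP address
```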

After the browser, working with the DNS, establishes the “residence” of the web site, it sends a request to the web server for the page it wants. The specific page requested is normally determined by whatever follows the web address, that is, everything after the .com or .net as the case may be. So if the URL looks like http://www.blahblah.com/about.asp, the path is the part that reads “about.asp”, which is the specific page the internet user wants to see. If the page is available on the server, the server finds the requested file, runs any suitable scripts and, if necessary, exchanges cookies (small pieces of information sent by a server to a browser in order to perform certain tasks, e.g. to access a page that requires a username and password), then sends the page, usually in HTML, back to the browser, and the browser formats it into a readable, viewable page. If the page contains images, the browser sends additional requests so that those files appear on the screen as well; in practice it is common for a single web page to require five or more separate requests to a server. If the page is non-existent or for some reason cannot be accessed, the server sends back one of the “error” messages so many of us are familiar with.
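
A hedged sketch of that exchange, stripped to its essentials: open a TCP connection to port 80, send a bare HTTP GET for the page, and read back the status line, headers and HTML body. The host and path are placeholders.

```python
import socket

host, path = "www.example.com", "/"           # placeholders
request = (f"GET {path} HTTP/1.1\r\n"
           f"Host: {host}\r\n"
           "Connection: close\r\n\r\n")

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    reply = b""
    while chunk := sock.recv(4096):           # read until the server closes
        reply += chunk

# First line is the status, e.g. 'HTTP/1.1 200 OK' or an error the user would see.
print(reply.decode("latin-1").split("\r\n")[0])
```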

That, in a nutshell, covers most of it, unless one is an IT student. Some might ask, after that tutorial, which is the most popular web server? The web server market is virtually a duopoly between the Apache HTTP Server, or simply Apache, and Microsoft's Internet Information Services (IIS). Other up-and-coming web servers worth mentioning are lighttpd and the Google web server, but these are small fish here. Apache once held a near-stranglehold on the server market, with about 70 percent of the market share as of November 2005. Although it still holds a significant lead over its main competitor, its market share had dropped to about 50 percent as of December 2007.

Comparing the two web servers is not easy, given the vast array of features they offer, and to an untrained eye the exercise might look like comparing a hybrid engine to a diesel-fuelled one, because the two systems operate quite differently. Apache, unlike IIS, is offered on an open source platform, though the Free Software Foundation deems some of its modules to be incompatible with the General Public License. Apache definitely has the edge in popularity as well as a rich tradition, and it is credited with playing a major role in the rise of the World Wide Web. Apache supports various features, including several programming languages through modules such as mod_perl, mod_python, Tcl support and PHP, and many of its versions are modular in structure, which lets users choose the modules appropriate to their requirements. Virtual hosting allows different websites to be served from a single installation of Apache, and Apache 2.0 is available on many platforms, including Windows. IIS, on the other hand, limits the extent to which one can customize functionality. IIS availability has been limited to the Windows environment, and IIS 6.0, an earlier version, was supported only on Windows Server 2003. However, IIS 6.0 was a major improvement on IIS 5.0, which had appeared hopeless against internet-savvy worms such as Code Red and Nimda. IIS 6.0 shipped with “locked down” settings as the default, and this helped curb worm attacks; no major attacks were reported after IIS 6.0 was introduced.

The latest version of IIS is IIS 7.0, which rewrote almost everything from the earlier versions and made some bold changes towards modularity. In this version only the binaries that are needed are installed, reducing the attack surface of the web server. The other strong point of IIS 7.0 is how easily it scales out, thanks to a simple configuration based on distributed XML files, which makes deployment in large-scale web hosting facilities quicker and easier.

When it comes to choosing the right web server, the debate about which of the two is ideal will go on and on. Microsoft's IIS certainly finds favour with the Fortune 500 companies, with the majority of them using it, whereas many top internet companies seem to think that Apache is the way to go. Each of the two rivals offers compelling reasons for its use. Apache's distributed configuration feature is .htaccess, a powerful tool that makes it possible to override a site's configuration using a text file in the content directory. But alas, using the feature may cause problems, and Apache's own website recommends avoiding it altogether where possible. IIS, for its part, also supports distributed configuration, in the web.config format; if you override the configuration for a site in IIS, the setting is stored in the web.config file by default. Clearly there are considerations to be made, depending on the circumstances one is in, and the choice does not always come down to price. For instance, if you choose a system that is free but unfamiliar to you or your staff, you may see overheads go up in the form of training costs, and in future you may find you need to hire experts to update or configure the system, further driving up the cost of what was otherwise “free” software. If one runs a cost-conscious operation but has the right support, then Apache would be the natural choice. For small enterprises with no money to hire support but with enough money to buy proprietary applications, IIS would be the choice. IIS is essentially free if one buys a Windows operating system, while Apache is available as a free download and comes with many Linux distributions. And it is not all about IIS or Apache; as the article hinted earlier, there are other web server options. Sun Java System Web Server can also be downloaded without paying a dime, but only for development, testing and staging needs; a production license for Sun Java System is about $1,500 per CPU. There is also the Zeus Technology web server, available for roughly the same price but covering two CPUs. It may, however, require an administrator who is not faint-hearted, and if one wishes to deploy on Windows one may have to change plans, as the Zeus system does not allow that. Choosing the right web server is ultimately a balancing act between one's needs, administrative capabilities and the organization's skills.

From the foregoing, one can spot some of the weaknesses of each web server. For instance, IIS is only available for Windows, and until Microsoft paraded its latest version only Apache was modular, which limited IIS's customization; conversely, one might be confronted with high maintenance and training bills for running Apache. It can thus be said that none of these web servers is perfect: each has its strongest points as well as its weaker ones.

The other topic for discussion in this issue is the web application server. Web application servers are often confused with web servers, but these workhorses are distinguished from web servers by their widespread use of server-side content (content generated by the server) and constant integration with database engines. In other words, they are middleware that connects software applications with web servers. On closer scrutiny, the definition of the term “application server” has actually evolved. In its strict sense, an application server manages the connection between a client and server-side applications; nowadays the term has come to cover things such as development tools, business intelligence tools, data integration tools, e-commerce and personalization services. Here is how they work. When a user, through his browser, requests a file that a web application server usually processes, the web server relays that request to the web application server, which processes it and ultimately sends the results back to the web server; the web server then returns the results to the browser. The application server thus “relieves” the web server of some of the work. Consider it this way: requests to a single web server might run into thousands or even hundreds of thousands, which could bring the web server to a virtual standstill. By doing all this donkeywork, the web application server gives web developers time to concentrate on building more interactive and data-rich websites with functionality such as generating Flash application data or running e-commerce sites. Today there are over 20 web application servers, and they are commonly referred to simply as application servers, perhaps to reflect that they are used on internal networks; according to industry watchers, many of them nowadays use Java technology, apart from traditional lone rangers such as Microsoft's Windows Server 2003.
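
To make that division of labour concrete, here is a minimal sketch, not tied to any of the products discussed, of server-side content generation: a tiny Python WSGI application that builds a response per request, of the kind a front-end web server would normally relay requests to.

```python
# Minimal WSGI "application server" sketch using only the standard library.
from datetime import datetime
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # The application-server work: generate the content dynamically per request.
    body = (f"<h1>Hello</h1>"
            f"<p>Generated at {datetime.now():%H:%M:%S} "
            f"for {environ['PATH_INFO']}</p>")
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    # The built-in reference server stands in for the front-end web server here.
    with make_server("127.0.0.1", 8000, app) as httpd:
        print("Serving on http://127.0.0.1:8000 ...")
        httpd.handle_request()    # serve one request, then exit
```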

Java application servers fall broadly into two camps: JSP and servlet runners on the one hand, and full J2EE servers on the other. There are also products that do not fit into these two classifications and are normally known by their frameworks or languages; they include Macromedia's ColdFusion and Apple's WebObjects. The recurring theme in all of these is interoperability, even as each vendor maintains its own server products. When it comes to price, the gulf can be as wide as the Grand Canyon: there are application servers that cost nothing and there are those that cost thousands of dollars per CPU. Between these two extremes it is possible to find application servers that are reasonably priced, especially for small businesses. But whether one chooses the free goodies or spends stacks of cash on these products, one thing remains certain: there is enough choice to leave one spoilt for choice. This issue will attempt to compare the various application servers available on the market; the exercise can never be regarded as definitive, but it is a good start.

Apache Tomcat 5.x is a servlet runner and does not support many of the modern features of commercial products. It is an open source web application server and is especially favoured by small businesses. Caucho Resin is another application server, but with a price tag of about $1,000 per server it is not for the faint-hearted; in return, it performs better than most servlet engines in major areas. Then there is the Sun Java System Application Server, formerly known as the Sun-Netscape Alliance iPlanet server. It offers a compatible platform for developing and deploying Java web services, naturally, and comes in different editions: Platform Edition 8 is free, whereas Standard Edition 8 and Enterprise Edition 7 cost roughly $2,000 and $10,000 per CPU respectively. Zope 2.7 is another popular application server. Written in the Python language, it is designed for creating content management systems; to its credit, and despite being offered on an open source platform, Zope is a powerful application platform. Macromedia's ColdFusion is also a formidable application server: applications can be compiled to Java and deployed on J2EE servers, and it has the further advantages of being easy to manage and integrating well with Macromedia's development tools.

How DNS works


One of the important things to remember about the DNS is that it must operate as an open, shared resource; that is, the DNS has to be fault-free and able to be trusted by the Internet community.

Another important function of the DNS is that it provides a convenient solution to addressing. Given that the number of network users keeps increasing, the DNS offers an easier means of identification than using the computer's IP address directly. IP addresses are unique 32-bit numbers, usually written in dotted decimal form, which before the DNS had to be translated through the “HOSTS.TXT” table; by functioning effectively, the DNS prevented that table from being overwhelmed.
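
A quick illustration of that point: the dotted decimal form is just a human-readable rendering of a single 32-bit number, as the placeholder address below shows.

```python
import socket
import struct

addr = "192.0.2.53"                                   # placeholder address
as_int = struct.unpack("!I", socket.inet_aton(addr))[0]
print(as_int)                                         # 3221226037 -- one 32-bit value
print(socket.inet_ntoa(struct.pack("!I", as_int)))    # back to '192.0.2.53'
```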

The way the DNS works owes much to its nature as a hierarchical name space: there are the top-level domains (TLDs), then the second-level domains, and so on. In the Internet's early days, the recognized TLDs followed the names of the organizations involved, such as .arpa, .csnet, .bitnet and .uucp, after the four main organizational networks that were internetworked. Eventually, in 1986, these four groups, together with Postel and Mockapetris, came up with seven TLDs: .com, .net, .org, .edu, .gov, .mil and .int. These TLDs are assigned according to the function of a given group or organization; the next chapters will explain these TLDs further.

After the top-level domain, the second-level domain name identifies the uniqueness of an address; second-level domain names are therefore unique. For example, many organizations may share .com, but only one of them will own a specific name under .com: there is only one yahoo.com and one google.com. At this point a domain is already specified, although further specification becomes necessary as a user navigates within it. Hence there is also the third-level domain, which identifies a further series of areas. A domain may well stop at the third level, since the hierarchy already allows the creation of unique paths without replicating another; the third level can identify functions, geography or any other organization of that portion of the name space.

At the apex of this pyramid, the DNS established thirteen root servers, each listing the IP addresses of the computers that hold the zone files for each of the top-level domains. The hierarchy makes the name-to-number mapping distributable, since there is no longer a single central server acting as the reference point for translation, as there was under the former centralized “HOSTS.TXT” system. When a user enters a query, the resolver works through the hierarchy: the entered domain name is read from the top-level domain downward, creating a direct path to the computers responsible for that specific domain. For example, if a user looks for the email page of yahoo.com, the domain name identifies .com as a commercial address (the top level), and the subsequent relay of information takes place within yahoo.com rather than through a centralized server. In this way the hierarchy identifies the IP address that corresponds to a name within a given domain.
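
As a small sketch of that path, the snippet below reads a placeholder name from its top-level domain downward and then lets the system resolver return the IP address that the hierarchy ultimately maps it to.

```python
import socket

name = "www.example.com"                 # placeholder host name
labels = name.split(".")
print(list(reversed(labels)))            # ['com', 'example', 'www'] -- TLD first
print(socket.gethostbyname(name))        # the IP address the name maps to
```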

[Figure: Domain Name Space workflow (image from ‘Domain Name Space’, Wikipedia)]
BIBLIOGRAPHY
‘Domain Name System (DNS) History’. 2008. Living Internet. [Online] Available at:
http://www.livinginternet.com/i/iw_dns_history.htm
Weinberg, J. 2000. ‘ICANN and the Problem of Legitimacy’. Duke Law Journal,
vol. 50, no. 1, pp. 187+.