Apr 07, 2011 — 3 comments
This tutorial describes how to implement a web crawler in Java: a program that crawls the web and indexes pages for later retrieval by a search engine. The focus is on proper concurrency, that is, how to process multiple links at the same time using a breadth-first algorithm without running into threading issues. For now the links are only graphed and displayed; simple indexing will be added in subsequent versions.
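The concurrent breadth-first idea can be sketched roughly as below. Note the assumptions: the link graph is an in-memory map with placeholder URLs so the sketch runs without network access (a real crawler would fetch each page and parse its links), and each BFS level is fetched in parallel on a thread pool while a concurrent set guarantees every URL is processed only once.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CrawlerSketch {

    // Hypothetical in-memory link graph standing in for real HTTP fetches.
    static final Map<String, List<String>> GRAPH = Map.of(
            "http://a.example", List.of("http://b.example", "http://c.example"),
            "http://b.example", List.of("http://c.example"),
            "http://c.example", List.of("http://a.example"));

    public static Set<String> crawl(String seed, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Set<String> visited = ConcurrentHashMap.newKeySet(); // thread-safe "seen" set
        List<String> frontier = List.of(seed);
        visited.add(seed);

        while (!frontier.isEmpty()) {
            // "Fetch" every page of the current BFS level in parallel.
            List<Future<List<String>>> results = new ArrayList<>();
            for (String url : frontier) {
                results.add(pool.submit(() -> GRAPH.getOrDefault(url, List.of())));
            }
            // Collect newly discovered links for the next level.
            List<String> next = new ArrayList<>();
            for (Future<List<String>> f : results) {
                for (String link : f.get()) {
                    // Set.add() is atomic here, so each URL is queued exactly once
                    // even when several workers discover it simultaneously.
                    if (visited.add(link)) next.add(link);
                }
            }
            frontier = next;
        }
        pool.shutdown();
        return visited;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(crawl("http://a.example", 4));
    }
}
```

The level-by-level structure keeps the breadth-first order intact while still parallelizing the expensive part (the fetches) within each level.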
Since Java 5, working with RMI (Remote Method Invocation) is very easy: stubs are generated automatically at runtime, so you no longer need the rmic compiler unless you must support legacy RMI clients. Let's look at a very minimalistic example.
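A minimal sketch of that idea, assuming a hypothetical `Greeting` interface; server and client run in one JVM here purely for demonstration, whereas in practice they would be separate processes:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiSketch {

    // Hypothetical remote interface: it must extend Remote and every
    // method must declare RemoteException.
    public interface Greeting extends Remote {
        String greet(String name) throws RemoteException;
    }

    static class GreetingImpl implements Greeting {
        public String greet(String name) { return "Hello, " + name; }
    }

    public static String roundTrip() throws Exception {
        GreetingImpl impl = new GreetingImpl();
        // Export the object: the stub is generated at runtime, no rmic needed.
        Greeting stub = (Greeting) UnicastRemoteObject.exportObject(impl, 0);

        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeting", stub);

        // "Client" side: look up the stub by name and invoke the remote method.
        Greeting remote = (Greeting) LocateRegistry
                .getRegistry("localhost", 1099).lookup("greeting");
        String reply = remote.greet("world");

        // Unexport so the JVM can exit cleanly.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

The key line is `UnicastRemoteObject.exportObject(impl, 0)`: it returns a runtime-generated stub, which is what gets bound in the registry and handed to clients.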
It is often useful to obtain information about a host's network interfaces programmatically. The Java standard library includes a number of classes that provide access to this information, the most important being java.net.NetworkInterface, which received a major facelift in Java 6: it is now possible to get much more information about every interface in the system, notably an interface's MAC address.
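A short sketch of enumerating the interfaces and printing each MAC address; `formatMac` is a helper introduced here for illustration, and note that `getHardwareAddress()` may return null (e.g. for the loopback interface or when the caller lacks permission):

```java
import java.net.NetworkInterface;
import java.util.Collections;

public class ListInterfaces {

    // Helper: formats a raw hardware address as "00-1A-2B-..." notation.
    public static String formatMac(byte[] mac) {
        if (mac == null || mac.length == 0) return "no MAC";
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < mac.length; i++) {
            if (i > 0) sb.append('-');
            sb.append(String.format("%02X", mac[i]));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        for (NetworkInterface nif :
                Collections.list(NetworkInterface.getNetworkInterfaces())) {
            // getHardwareAddress() is the Java 6 addition mentioned above;
            // it returns null for interfaces without an accessible MAC.
            System.out.printf("%s (%s): %s%n",
                    nif.getName(), nif.getDisplayName(),
                    formatMac(nif.getHardwareAddress()));
        }
    }
}
```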