Sunday, 21 November 2010

Use Proxy For Anonymous Browsing

You have probably already been in a situation where you wanted to visit a website, but not from your own computer. There are many reasons to browse the internet anonymously, and the main one is to hide your IP address. When you visit a website you provide a lot of information about yourself and your computer to that website. From the IP address it is possible to determine not just your country but also a more precise location. Although it is not easy to connect an IP address with a particular person, sometimes you simply don't want to expose your location or the properties of your browser.

There is a very simple way to avoid exposing your details to the destination page. You can use a proxy service to avoid a direct connection with the website. A proxy is a service that acts as an intermediate computer: it accepts your web addresses and returns the requested web pages. The target website sees the proxy computer, not your computer or your browser. This way you are not accessing websites directly, and your IP address and other data are exposed only to one computer instead of the whole world. The only disadvantage of browsing this way is that the communication is a little slower because of the additional "element" between your computer and the web server. This additional element is a computer which downloads pages and sends them to your browser. But this is the only price you have to pay for anonymity.
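
As an illustration, here is a minimal Python sketch that routes a request through a proxy using only the standard library. The proxy address is a placeholder; any HTTP proxy you have access to would be substituted there.

    import urllib.request

    # Placeholder proxy address -- replace with a real HTTP proxy host and port.
    proxy = urllib.request.ProxyHandler({
        "http": "http://203.0.113.10:8080",
        "https": "http://203.0.113.10:8080",
    })

    # All requests made through this opener go via the proxy,
    # so the target site sees the proxy's IP address, not yours.
    opener = urllib.request.build_opener(proxy)

    with opener.open("http://example.com/") as response:
        page = response.read()
        print(response.status, len(page), "bytes received")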


There are many free proxy services. Just Google for "free proxy" and you will find many websites offering this service for free. Browsing this way is still pretty simple. Instead of entering the web address into the address bar of the browser, you enter it into the address field on the proxy page and press enter or click the "go" button. The proxy website downloads the web page and sends it to your browser. You get exactly the same content as you would get by browsing directly, unless the server returns data according to the visitor's country. In that case you will probably get some local content, because when you browse through a proxy you access the target website with the IP address of the proxy, which may be hosted in a different country.

Proxy servers or websites offer some additional functions for even better protection. One of them lets you prevent storing cookies--small pieces of data that are stored on your computer when you visit certain pages. The second protection function is to remove JavaScript code, which normally runs in your browser when the page loads. You can enable or disable these options on the same page where you enter the target address.

In most cases you can browse directly because there is no need to hide yourself. But in some cases it is better to browse anonymously and not reveal your true identity. You can also use proxy websites to access your own website from other countries in order to check whether the right ads for that country are displayed. It is up to you to decide when to browse this way. Either way, proxies let us effectively hide our location and computer data.

Tuesday, 16 November 2010

FM Radio - Free Music on Every Corner

Radio is probably one of the most popular free services. We can listen to it just about everywhere: at home, at work, on the bus, in the car, while we walk, and so on. By "radio" we usually mean a sound service that we can receive for free, without any subscription or expensive equipment. With radio you also get free music. You can listen to it while working, or you can relax and enjoy listening to your favorite band. There are many ways to get such a free sound service. Each has some advantages and disadvantages, but the fact is that the old FM radio as we know it is by far the most widely used broadcasting service.


You can listen to radio stations that broadcast over the internet. This service is still free, but you need access to the internet and a computer. There are also some devices (network players) that support internet broadcasting, but you still need access to the internet, either wireless (WiFi) or over the local network. The advantage of internet broadcasting is that there are thousands of stations, so it is easy, at least in theory, to find a station that plays music according to your taste. Cable and IPTV systems also offer radio stations, but this is limited to the place where you have the connection.

Satellites also offer a lot of radio stations. But to receive satellite stations you need a satellite receiver, so this type of reception is only suitable for home listening. There are also mobile satellite radio services, but in general they are not free. You have to pay a subscription. Despite the fact that satellite services are cheap and you get large coverage areas, this type of broadcasting is not suitable for general reception. Satellites do offer a huge choice of stations, but the receiving equipment is pretty complex.

Therefore, terrestrial broadcasting is the most popular platform for radio. While AM broadcasting is still used in some countries, mainly for international broadcasting, it is FM radio that we can find on every corner. There are many reasons for its popularity. It is free, and you only need a simple and cheap receiver. Receivers are now also integrated into mobile phones, MP3 players and many other devices. The quality of FM (frequency modulation) is very high. With quality stereo reception and a high-end receiver you get superb sound quality, comparable to vinyl records.

Because of the many advantages of terrestrial broadcasting there are many stations interested in using this medium. Of course, the radio-frequency spectrum is a limited resource and only a small fraction of the whole spectrum is allocated to broadcasting. Therefore, we can fit only a limited number of stations into this band. In addition, to prevent harmful interference there are many strict rules on frequency planning which further limit the number of radio stations that we can receive.

But despite all the disadvantages and limitations the FM radio will be used for many years to come. Interestingly, there is still no comparable digital broadcasting system that could replace the old analog radio.

Wednesday, 3 November 2010

Web Hosting Facts

Web hosting is more than just putting some files on a web server. It is also a relationship between the website owner and the web hosting provider. The purpose of a website is to continuously offer access to web pages--serving them around the clock without interruptions or failures. Of course, 100% up-time is not a realistic goal and chasing it would cost a lot of money. A reasonable expectation is about 99.9% up-time, which means that on average the system could be down for almost 9 hours per year. This is acceptable and doesn't cause much trouble to either the website owner or the visitors.
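
To make the up-time figure concrete, here is a small Python calculation of the allowed downtime for a few common up-time guarantees (the 99.9% row is the one mentioned above):

    HOURS_PER_YEAR = 365 * 24  # 8760 hours

    for uptime in (99.0, 99.9, 99.99):
        downtime_hours = HOURS_PER_YEAR * (100 - uptime) / 100
        print(f"{uptime}% up-time -> {downtime_hours:.1f} hours of downtime per year")

    # 99.0%  -> 87.6 hours
    # 99.9%  ->  8.8 hours
    # 99.99% ->  0.9 hours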

Things change when there are frequent malfunctions and the website is inaccessible for hours or even days. This is where we can start distinguishing good hosting from not so good hosting. If the hosting provider doesn't have good support, the website owner quickly becomes frustrated, and if the problems are not solved within a few hours the web hosting service loses that customer. Losing many customers also means losing reputation, and a bad reputation is the last thing a web hosting provider wants.

The next important fact about hosting providers is the service they offer and/or advertise. To attract customers many web hosting providers promise unlimited "everything". Is this possible? Well, yes and no. Theoretically they don't limit disk size, bandwidth, number of domains, etc. The truth is that the web server is a physical machine--a computer with a limited hard disk, limited processor capabilities and a limited amount of memory; in other words, a limited resource. In addition, the web server is usually not hosting only our website (unless we have such a hosting plan); it typically hosts hundreds of websites. All the server resources are limited and shared between all the websites hosted on that server. So you should understand what the word "unlimited" really means.

Another important fact about web hosting is connectivity with the internet. The connection between the hosting provider and its ISP should not be a bottleneck that limits how fast our pages load. The hosting provider should have a fast and reliable connection with the internet backbone. Pages that load slowly, and even more slowly when the server is loaded, are not visited frequently. We might lose visitors simply because of the poor internet connection of our web hosting provider.

Before we decide on a web hosting company it is a good idea to check all the key factors that contribute to fast and reliable hosting. We can also do a lot ourselves to speed up our pages by limiting the resources that are needed to load them. Pages without all the bells and whistles can still be useful to visitors.

Sunday, 31 October 2010

Web Content Management Systems

In general, a Content Management System (CMS) is a system that manages workflow, usually in a collaborative environment. Such a system is a collection of procedures that simplify complex or repetitive tasks. These procedures can be manual or computer-based. Website management is one of the tasks that needs a system to create, edit and manage content. Web pages are a typical example of content: you need an editor to create pages, a database to store them and a system to generate and retrieve pages when they are needed.

A Web Content Management System (web CMS) is a web application designed for creating and managing HTML content--web pages. A web CMS is used to manage a large collection of web resources (text, HTML code, images, PHP scripts, etc.). Web CMS functions usually include content creation, content control, content editing, maintenance functions, and functions specific to each CMS. In general, web content management applications provide authoring tools designed to allow users with little or no knowledge of programming or markup languages to create and manage content with relative ease. Therefore you don't have to know anything about HTML, CSS, PHP, JavaScript, AJAX or other fancy code.

Web content management systems use a database to store content, metadata, and any additional data that might be needed by the system. A web CMS usually contains a presentation layer which displays the content to website visitors based on a set of templates. A template is a basic page layout that contains content placeholders, page styles and other page data that does not change.
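
As a rough illustration of the template idea, here is a minimal Python sketch (not taken from any particular CMS) that fills content placeholders in a fixed page layout:

    from string import Template

    # The template is the fixed page layout; $title and $body are the placeholders.
    page_template = Template("""<html>
      <head><title>$title</title></head>
      <body>
        <h1>$title</h1>
        $body
      </body>
    </html>""")

    # A real CMS would load this content from its database.
    content = {"title": "Hello", "body": "<p>This page was generated from a template.</p>"}

    print(page_template.substitute(content))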

Most web CMS applications use server side caching to boost performance. This works best when the web content is not changed often but visits happen on a regular basis. Administration of web CMS is typically done through web-based interfaces, but some systems require the use of a desktop client. A web CMS typically requires an experienced administrator to set up and add features, but is primarily a Web-site maintenance tool for non-technical administrators. It allows non-technical users to easily make changes to a website with little or no training.

There are many open-source web Content Management Systems you can download and use for free. Some well known free CMS systems are WordPress, Joomla, and Drupal, to mention only the most popular. Those mentioned are based on the PHP scripting language. There are also CMS applications based on other platforms like .NET, ASP.NET, Java, Java Bundle, Perl, Python, Ruby on Rails, etc.

From merely viewing a web page you never know for sure how it was created or generated. Creating web pages with a powerful CMS is pretty easy and fun--you can forget about code and HTML tags and focus on content and creativity.

Thursday, 28 October 2010

Radio Frequencies - What Do They Mean and Why Are They Important?

The radio frequency spectrum is a natural and limited resource. Radio waves are a means to transfer information from one point to another without using any physical medium. One of the most important properties of any wave is its wavelength or, equivalently, its frequency. The radio frequency determines the position in the radio frequency spectrum and hence all the properties of radio wave propagation and potential use.

Because radio waves travel across country borders and may interfere with other radio waves, there are many rules, frequency plans and procedures that define how to use the radio frequency spectrum to avoid interference. Because different frequencies have different properties, there are general harmonized frequency bands that define the main purpose of each band and the basic technical parameters of transmitters using these frequencies. These so-called allocations are accepted at the international level and provide the basic rules for frequency usage. Each allocation is then further refined, and countries may have special agreements on how to use specific frequencies.

One of the most popular services using radio frequencies is terrestrial or satellite broadcasting. Radio and television are a well known and established way to send picture and sound with radio waves. Because we usually want large coverage areas with few transmitters, we use high power transmitters on high transmitting sites. Large coverage areas also mean coverage across the border.


This is a very important fact because, in the same area, only one transmitter can operate on a particular frequency without causing interference. Of course, there are special cases like digital broadcasting and single frequency networks where nearby transmitters operate on the same frequency without causing interference, but for analog broadcasting careful frequency planning is a must.

For broadcasting frequency bands there are many special regional agreements that very precisely define particular frequencies allocated to each country, procedures to be used to modify the plan and also many rules that have to be respected to avoid interference.

When we would like to listen to a particular radio station we need to know the frequency on which it broadcasts. A frequency is like a street address where each house has its own number. The same applies to television. However, for practical reasons we usually do not work with frequencies directly but use channels, where each channel number represents one central frequency with some channel bandwidth.
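
The mapping from a channel number to a center frequency is just a simple formula. The following Python sketch uses illustrative numbers (a hypothetical band starting at 87.5 MHz with a 100 kHz raster); real channel plans differ between regions and services.

    BAND_START_MHZ = 87.5      # hypothetical start of the band
    CHANNEL_SPACING_MHZ = 0.1  # hypothetical channel raster (100 kHz)

    def channel_to_frequency(channel: int) -> float:
        """Return the center frequency in MHz for a given channel number."""
        return BAND_START_MHZ + channel * CHANNEL_SPACING_MHZ

    for ch in (0, 10, 100):
        print(f"channel {ch:3d} -> {channel_to_frequency(ch):.1f} MHz")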

In general, most consumers are not aware of all the technical details that regulate frequency usage and make many wireless services possible. They only expect good music, quality movies and mobile phones that work anywhere.

System Optimization - How to Make Your PC More Efficient

You just got a brand new PC with the latest Windows and you are amazed how fast it runs. Then you start adjusting Windows settings and installing the programs that you need. After a while your computer is complete: everything works, all operations are fast and you focus on your work. A common observation after a year, or even sooner, is that the computer is not as fast as it used to be. You start thinking about what could be wrong. The hardware hasn't changed, so it seems that some component is causing trouble. You go into the BIOS settings only to confirm that all parameters are at their initial values and nothing has changed. But it is pretty obvious that programs start more slowly, opening and saving larger files takes a lot of time, and some operations produce annoying messages.

It is very likely that the PC hardware is still working as it was when the computer was new. The problem is the operating system (Windows), which gets cluttered with many unnecessary files, and sometimes the disk also gets fragmented. There are a few simple tricks that can help get your PC back into shape.

The most common problem is that you have installed many programs that you don't need anymore. You should uninstall anything you don't need. Go from the first installed program to the last one and ask yourself when you last actually needed it. Unfortunately, some programs do not remove all the files and registry entries that were created during installation. Therefore, you should also check the folder in which they were installed and manually delete all the files and folders that belong to the uninstalled program but are still there.
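
If you want to spot such leftovers quickly, a small script can list the folders still sitting in the install directory. Here is a minimal Python sketch; the install path and the list of programs you kept are assumptions you would adapt to your own machine, and nothing is deleted automatically.

    from pathlib import Path

    # Assumed install location and the programs you know you still use.
    install_dir = Path(r"C:\Program Files")
    still_installed = {"Mozilla Firefox", "7-Zip"}  # hypothetical examples

    for folder in sorted(install_dir.iterdir()):
        if folder.is_dir() and folder.name not in still_installed:
            # Candidate leftover -- review it manually before deleting anything.
            print("check:", folder)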

After you clean the disk you need to clean the registry. This is the central Windows storage for various system and user settings. A bloated registry is a common bottleneck that can slow down all operations. Unfortunately, it is not practical to clean it manually. There are many good registry cleaners around; Google for one and choose the one that fits your needs. Such cleaning programs usually also have functions to clean other parts of the operating system, like temporary files. A registry/PC cleaner will remove references to programs that don't exist anymore and will also delete unnecessary files.

The third step in making your PC more efficient is to run the disk defragmenter. This program is included in Windows and rearranges the files on the disk so that they load faster.

After these steps are done your PC should run faster and you should also have more space on your disks.

If you are still not satisfied with your PC then you can try to increase the size of your desktop area. If you have an old CRT monitor or a small display then it is maybe time to invest in a big LCD monitor. A big working area means comfortable work and less switching between open windows. Check http://computerlcdmonitors.org/ for more information about computer LCD monitors.

Wednesday, 20 October 2010

Turbo Pascal Internals

Do you still remember Borland Turbo Pascal, the most successful Pascal compiler ever? If the answer is yes then you probably remember that Turbo Pascal was indeed turbo: a very fast compiler which generated quite fast executable code. It featured an integrated development environment (IDE) where you could edit, run and debug your programs. Originally created as Blue Label Pascal by Anders Hejlsberg, it was licensed by Borland as Turbo Pascal in the early 1980s. The last version for DOS was released in 1992 as Turbo Pascal 7.0. Borland also released a similar compiler for Windows and later a brand new product, Delphi--a rapid application development tool for Windows.

Have you ever wondered what makes Turbo Pascal such a fast compiler? If you are interested in Turbo Pascal internals, the basic units that make up the compiler are described below. The Turbo Pascal design is not the conventional one taught in compiler design books; it is oriented toward speed. The parser is tightly connected with the code generator. There is some low-level intermediate code, but most of the raw code is generated by the parser.


A great example of the data structures and algorithms used in Turbo Pascal is TPC16. TPC16 is a Turbo Pascal compatible compiler: it generates unit and executable files compatible with the Turbo Pascal 7.0 command line compiler, and it is itself written in Turbo Pascal. It consists of the units described below.

Common Variables
All global variables used by the compiler are declared here. Declarations are grouped into sections. Some sections declare variables which hold data for the particular module being compiled, and these variables are saved (pushed) while used units are processed.

Reserved Words
This unit declares a symbol table holding all reserved words. It is separated from the other units because the file also includes hash values for the reserved words, which are generated by a separate program.

Type Definitions
This unit defines the data structures that describe basic Pascal types (integer, real, extended, Boolean, char, string, file, set, array, object, etc.). The unit also contains functions to test type compatibility, store types in the symbol table and process types.

I/O Utilities
Routines for reading and writing files are located in the I/O Utilities unit. This unit also contains procedures for error handling and error reporting.

Symbol Table Management
Symbol table management is one of the most important internal operations in every compiler. This unit contains procedures to create a symbol table, insert identifiers into it, search it for an identifier, search a unit, procedure or record scope for a particular identifier, and perform all other operations on symbol tables.
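
To give a feel for what such a unit does, here is a tiny Python sketch (not the TPC16 code, just an illustration) of identifier lookup through a chain of nested scopes:

    class Scope:
        """One symbol table level: a unit, procedure or record scope."""
        def __init__(self, parent=None):
            self.symbols = {}   # identifier -> its declaration info
            self.parent = parent

        def insert(self, name, info):
            self.symbols[name] = info

        def lookup(self, name):
            # Search this scope first, then walk outward to enclosing scopes.
            scope = self
            while scope is not None:
                if name in scope.symbols:
                    return scope.symbols[name]
                scope = scope.parent
            return None

    unit_scope = Scope()
    unit_scope.insert("Pi", "constant")
    proc_scope = Scope(parent=unit_scope)
    proc_scope.insert("i", "local variable")

    print(proc_scope.lookup("i"))   # local variable
    print(proc_scope.lookup("Pi"))  # constant, found in the enclosing scope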

Scanner
This unit holds the procedures that scan the source file and generate a stream of tokens. A token is the smallest element of the source that is processed by the parser. The scanner reads source files, skips comments, processes compiler directives, and generates tokens.
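
A scanner boils down to a loop that groups characters into tokens. This is a deliberately simplified Python sketch of the idea (real Turbo Pascal scanning also handles comments, directives, strings and much more):

    def scan(source: str):
        """Yield (kind, text) tokens for identifiers, numbers and single-char symbols."""
        i = 0
        while i < len(source):
            ch = source[i]
            if ch.isspace():
                i += 1
            elif ch.isalpha() or ch == "_":
                start = i
                while i < len(source) and (source[i].isalnum() or source[i] == "_"):
                    i += 1
                yield ("identifier", source[start:i])
            elif ch.isdigit():
                start = i
                while i < len(source) and source[i].isdigit():
                    i += 1
                yield ("number", source[start:i])
            else:
                yield ("symbol", ch)
                i += 1

    print(list(scan("x := x + 1;")))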

Parser
This is the brain of the compiler. It reads tokens, checks the language syntax and generates intermediate code. One of the reasons for the fast compilation lies in the parser: since it generates code which is already close to executable code, the next compilation steps are pretty fast.

Statements
This unit processes Pascal statements (for, while, repeat, etc.), begin-end blocks and assembler blocks.

System Functions
This unit processes system functions like abs, arctan, chr, int, odd, ofs, etc. and generates code for them.

System procedures
This unit processes system procedures like exit, fillchar, writeln, inc, val, etc. and generates code for them.

Expressions
This is the biggest unit and contains the functions and procedures that process expressions. An expression is anything that needs to be calculated--from a simple identifier to a complex expression with nested parentheses. There are many cases that need to be tested and processed, and the procedures in this unit also generate the majority of the code.

Calculator
This unit contains procedures to process various calculations. A calculation is an operation on one or two expressions (addition, subtraction, multiplication, etc.).

Assembler
This is the inline assembler that processes the instructions in an asm..end block.

Assembler Types
This unit declares data structures that are needed for inline assembler.

Code Generator
This unit processes the intermediate code and generates executable code plus the reference data needed by the linker. The code generator also performs code optimizations.

OMF Import
This unit imports OMF object files and processes OMF records.

Linker
This unit generates the final executable code and creates the output files. Before the code is generated, each referenced item is recursively processed and marked. Unmarked items are never reached at run time and are therefore not included in the final executable code.
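
This smart-linking step is essentially a reachability search over the references between items. A minimal Python sketch of the marking phase, using a made-up reference graph, could look like this:

    # Hypothetical reference graph: item -> items it references.
    references = {
        "main": ["WriteLn", "Calc"],
        "Calc": ["Add"],
        "Add": [],
        "UnusedProc": ["Add"],   # never referenced from main
        "WriteLn": [],
    }

    def mark_reachable(start, refs):
        """Recursively mark everything reachable from the program entry point."""
        marked = set()
        stack = [start]
        while stack:
            item = stack.pop()
            if item not in marked:
                marked.add(item)
                stack.extend(refs[item])
        return marked

    marked = mark_reachable("main", references)
    print("linked: ", sorted(marked))
    print("dropped:", sorted(set(references) - marked))  # UnusedProc is left out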


There are many excellent books on compiler design and implementation. However, the best book on compiler design is a compiler itself. If you are interested in Turbo Pascal internals or just need the source code of a real compiler, you should examine the Turbo Pascal compiler source code--a Turbo Pascal compiler written in Turbo Pascal. This source code shows all the beauty of the Pascal programming language and reveals all the tricks needed to build a fast and compact compiler for any language, not just Pascal.

Friday, 15 October 2010

LCD Vs HDTV - Is There a Difference?

Well, there is a big difference. In fact, it is difficult to compare two such different things: both are related to the television picture, but they mean different things. Years ago there was essentially only one television "definition" or picture resolution (actually there were a few of them, but they were pretty similar), known in technical circles as standard definition (SD). The only available way of displaying the TV picture was the cathode ray tube (CRT) display. Things were pretty simple despite the fact that there were a few color TV standards (PAL in Europe and many other countries, NTSC in the USA and Japan, SECAM in Russia and France). In recent years all TV sets have been capable of displaying all the TV standards used around the world.

With the introduction of digital TV broadcasting, television has changed significantly. There are now many different services that were not possible with analog broadcasting, and HDTV is one of them. HDTV means High Definition TeleVision. High definition means a picture with more lines and more pixels (points) per line. The good old SD television picture uses between 525 and 625 lines, depending on the system. This system was good enough and survived for decades; the only improvement was the addition of color in the late 1950s (or early 1960s in Europe). Two facts contributed to this pretty long life of the old standard definition TV. The first is that there was no easy way to add features or to modify the basic picture or transmission parameters. The second is that the picture was pretty good (less true for NTSC used in the USA, more true for PAL used in Europe). With the introduction of digital broadcasting, one of the first major improvements was HDTV: a picture with more lines, more sharpness and more detail.


LCD, on the other hand, is a technology for displaying a TV or computer picture. There are currently two technologies for displaying an HDTV picture: plasma and LCD. Each has some advantages and disadvantages, but you should not be too concerned with technical details that mean nothing to you. If you are choosing a new TV set then simply take a look at the picture. Compare different models, features and prices. Usually larger TV sets use plasma technology while smaller ones use LCD displays.

Now you know that LCD is one of the technologies to make panels that can display HDTV picture. LCD panels are used not just for TV sets but also for computer monitors and smaller displays in embedded systems.

It is also possible to watch TV on a computer LCD monitor or to use plasma or LCD TV set as a computer display. You can find more about the LCD technology at the http://computerlcdmonitors.org/ website.


Sunday, 10 October 2010

The Significance of SEO

SEO, or search engine optimization, is not essential for every website. If you have a blog or a simple homepage then you have probably promoted your site in a few places and you are getting visitors on a daily basis. However, if you have a website to sell some product or service then you need as much traffic as possible in order to increase your chances of making a sale. In general, there are two ways to get traffic to your website: paid advertising and SEO.


Paid advertising is the simplest way to get visitors, but it costs money. There are many advertising systems, like Google AdWords, where you target specific keywords or search phrases, and each time a web user searches for your keyword your ad is displayed. If the ad text is attractive enough and the user clicks on it, he comes to your site. You don't have to do anything to your website; you only need to pay for each click. The cost can vary from a few cents up to tens of dollars.

The other way to get traffic is completely free: traffic from search engines. But in order to get this traffic you need to rank at the top of the search results, preferably in the first position, because that position gets most of the clicks. To get to the top position for a specific keyword you need a page that is optimized for that keyword. This is called search engine optimization--SEO.

SEO is not rocket science. It means some simple approaches to emphasize relevant keywords (on-page optimization) and link building (off-page optimization). SEO starts with keyword selection: you should target keywords or search phrases that have low competition and high search volume. The easiest approach to creating an optimized page is simply to forget about search engines and to create an attractive page with easy navigation and quality unique content. After the page is created you can do some simple SEO tweaks to make sure the main keyword is present in the title and meta description, that header tags are used for section or paragraph subtitles, and so on.
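
As an illustration of such on-page checks, here is a small Python sketch (standard library only) that looks for a keyword in a page's title, meta description and header tags. The HTML snippet and the keyword are made-up examples.

    from html.parser import HTMLParser

    class OnPageChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.current = None
            self.title = ""
            self.headers = []
            self.meta_description = ""

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag in ("title", "h1", "h2", "h3"):
                self.current = tag
            elif tag == "meta" and attrs.get("name") == "description":
                self.meta_description = attrs.get("content", "")

        def handle_endtag(self, tag):
            self.current = None

        def handle_data(self, data):
            if self.current == "title":
                self.title += data
            elif self.current in ("h1", "h2", "h3"):
                self.headers.append(data)

    html = """<html><head><title>Digital Data Recovery Tips</title>
    <meta name="description" content="Simple digital data recovery advice."></head>
    <body><h1>Digital Data Recovery</h1><p>...</p></body></html>"""

    keyword = "digital data recovery"
    checker = OnPageChecker()
    checker.feed(html)
    print("in title:  ", keyword in checker.title.lower())
    print("in meta:   ", keyword in checker.meta_description.lower())
    print("in headers:", any(keyword in h.lower() for h in checker.headers))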

But even if your primary goal is not to get a lot of traffic, it makes sense to do some SEO on your site just to make sure it will appear in the search results when somebody googles for information available on your website. If web users can only reach your site by clicking a link or typing in the address, you will get only a few visitors. And for only a few visitors it doesn't make sense to invest the time and effort to create a website, does it?


A simple example of an optimized page is the http://digitaldatarecovery.org/ website. The layout is very simple, which means fast loading, the important page elements contain the main keyword, digital data recovery, and both human visitors and search engines can immediately see what the page is about.

In many cases SEO means only a few simple modifications of the page to emphasize relevant keywords. Of course, off-page optimization and link building are also important, but they need more time and effort.

Tuesday, 5 October 2010

Google Answers All Questions

Sooner or later you come to a situation where you have a problem, some information is missing, or you are simply curious and would like to know the answer to a simple question. Since the early 1990s there has been one universal and almost unlimited source of knowledge: the world wide web. There are billions of pages about every imaginable topic. For every question you may ask, there is at least one web page that has the answer.

But having billions of pages is of no use if you cannot find the information you are looking for. The solution to this problem is web search engines. They crawl all the pages and index them, collecting information about the content of web pages around the world. But even this huge database is of little use by itself, since there are thousands of pages on any subject. It is very difficult to find what you are looking for if there are thousands of pages writing about your subject.

The key element of a successful web search is serving relevant results. This means showing only those pages that may have some useful content about the keyword you search for. This is a very difficult task, and it is also the point where web search engines start to differentiate themselves. There is one search engine that excels from every point of view: Google.

Google is currently the only search engine that maintains consistently high quality search results. A key element of this success is the PageRank algorithm, which is based on the number and quality of the links pointing to a particular page. Of course, there are many factors that contribute to the relevance of a page; quality, unique content is probably the most important. The fact is that Google is able to provide relevant web pages for any query.
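
The core idea behind PageRank can be written in a few lines. Here is a simplified Python sketch of the power iteration on a tiny made-up link graph; the real ranking uses many more signals than this.

    # Tiny hypothetical web: page -> pages it links to.
    links = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
    }

    damping = 0.85
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(50):  # iterate until the ranks settle
        new_rank = {}
        for page in pages:
            incoming = sum(rank[p] / len(links[p]) for p in pages if page in links[p])
            new_rank[page] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank

    for page, score in sorted(rank.items(), key=lambda item: -item[1]):
        print(page, round(score, 3))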


The key to finding what you are looking for is using Google with the right keywords. Keywords are the other half of the success: if you are looking for a specific subject you need to find the right keywords to get relevant results. This is not an easy task. You should refine your search keywords according to the results you get. Simply open the first few pages from the Google results and you will soon understand which keywords are related to your subject. All you have to do is use them in your next search. This way you will come closer to the page you want.

On the web you can find anything. All you need is patience and the right keywords. Google offers a universal machine that answers almost any question. It is a giant encyclopedia--all you have to do is open it.

The author is a big fan of Google and likes to create websites about popular topics. One of his projects, about cheap garden furniture, provides information about various outdoor furniture, including aluminium garden furniture.

Remember, on the web you can find the answer to almost any question!

Thursday, 30 September 2010

Compiler Optimizations

The task of every compiler is to translate high-level language source code into machine code that will run on the target processor. Whether this is achieved through an assembly language file or by linking object files, the ultimate goal of every compiler is to generate code for the target processor. In principle this is a simple task: every high-level statement can be translated into a series of target instructions. However, without some optimizations this code would be very inefficient. Unoptimized code still works, but it is slower and the files are bigger.


The nature of high-level statements is to operate on variables. Loading and storing of variables can happen in any order, but transfers to and from memory are slow compared to transfers between registers. If some value is stored to a memory location and is needed again immediately in a different calculation, then it doesn't make sense to load it from memory again, since it is already present in a register. With proper register allocation we can save a lot of redundant loads and stores. There are many optimization algorithms to make the code as efficient as possible; in fact, compiler optimization is a science of its own, and there are many books written on the subject.

Most optimization algorithms are based on control and data flow analysis. There are many optimization approaches: reducing jumps, removing dead, redundant or common code, removing redundant loads and stores, using registers efficiently, replacing slow instructions with faster ones, replacing specific arithmetic calculations with shorter instructions, etc. Each optimization is based on some code or data property. Some of the common optimizations include constant folding, integer arithmetic optimizations, dead code elimination, branch elimination, code-block reordering, loop-invariant code motion, loop inversion, induction variable elimination, instruction selection, instruction combining, register allocation, common sub-expression elimination, peephole optimizations, etc.
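
To make one of these concrete, here is a minimal Python sketch of constant folding on a tiny expression tree. It is an illustration only, not tied to any particular compiler.

    # An expression is either a number, a variable name, or (op, left, right).
    def fold(expr):
        """Recursively replace constant sub-expressions with their computed value."""
        if not isinstance(expr, tuple):
            return expr
        op, left, right = expr
        left, right = fold(left), fold(right)
        if isinstance(left, (int, float)) and isinstance(right, (int, float)):
            ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
                   "*": lambda a, b: a * b}
            return ops[op](left, right)
        return (op, left, right)

    # x * (2 + 3) becomes x * 5 at compile time.
    print(fold(("*", "x", ("+", 2, 3))))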

The basic rule of every optimization is that the original functionality of the program must not change. All optimization algorithms are based on the assumption that the program under analysis is the only thing changing its memory locations. In reality, interrupts or hardware ports can break this rule, so proper care must be taken to prevent optimizations on memory locations which might be modified outside the code we are trying to optimize. Another programming technique that complicates optimization is the use of pointers. But with proper analysis it is possible to apply safe optimizations to most parts of the code.

You can optimize code for speed or for size, so that the program runs faster or occupies less memory. The latter is extremely important in embedded programming, where the memory of microcontrollers is limited. Compiler optimizations are an important part of every compiler; unoptimized code is a waste of memory and time on any target system.

A practical example of compiler optimizations in action is the Pascal compiler for 8051 microcontrollers, which uses Turbo Pascal syntax and generates optimized code. If you are interested in practical compiler construction you can examine the Turbo Pascal internals--a Turbo Pascal compiler written in Turbo Pascal which can be used as an excellent book on compiler design.

Monday, 20 September 2010

Home Web Hosting - Access Your Home Server From the Internet

If you are a webmaster developing or maintaining websites then you probably have a home web server for development or testing purposes. An old computer can easily be transformed into a Linux server. If you also install Apache, MySQL and PHP you will be able to host websites at home. Of course, this is not the same hosting as that offered by commercial providers; it is only a place where you can install, develop and test web applications. With some additional settings you can access your home computers from any place that has internet access, which is very convenient for getting files you forgot at home.

Assuming that you already have a local area network (LAN) including a router, which is the bridge between your computers and the internet, you need no additional equipment. In order to enable web access from the internet, you have to configure your router to forward outside web requests to your server. The default port for the web is 80, but you can choose any other port number to increase security.


The first prerequisite for remote access is a domain. This is the name of your home server on the internet. There are many services that offer free domains and free updating of IP addresses. This is especially important if you don't have a static IP address. Most internet providers dynamically assign IP addresses which change once a day or with every connection. Since knowing your external IP address is crucial to access your computers from the internet, you need a way to update the domain DNS records.

Many routers support popular dynamic DNS services like DynDns. If your router has no such function you can still install a simple software utility on your computer which updates the address every time it changes. You should create an account, select a preferred domain name and choose a name which you will use as a sub-domain. Enter this account data into the router settings or into the external IP address update software.
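
The update mechanism itself is very simple: a script checks the current public IP address and, when it changes, calls the provider's update URL. The following Python sketch uses placeholder URLs and hostname; the exact endpoints and authentication depend on the dynamic DNS service you sign up with.

    import time
    import urllib.request

    # Placeholder endpoints -- replace with your provider's values.
    IP_CHECK_URL = "https://example.com/myip"
    UPDATE_URL = "https://dyndns.example.com/update?hostname=myhome.example.net&myip={ip}"

    last_ip = None
    while True:
        with urllib.request.urlopen(IP_CHECK_URL) as response:
            current_ip = response.read().decode().strip()
        if current_ip != last_ip:
            # Tell the dynamic DNS service that our address has changed.
            urllib.request.urlopen(UPDATE_URL.format(ip=current_ip))
            print("updated DNS record to", current_ip)
            last_ip = current_ip
        time.sleep(300)  # check again in five minutes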

Now you have access to your router. Each time you enter your name followed by the chosen domain, you will reach your router. If you have enabled access to its user interface from the external port, you will be able to configure it just as you do from your home computer. To reach your server you need to configure port redirection. If you would like to use the standard port 80 for web access, simply create a rule to forward the external (public) port 80 to local port 80 at the IP address of your web server. That is everything that is needed to access your web server from anywhere.

For hosting real websites, especially commercial ones, you need reliable hosting like Hostgator, simply because the website needs to be available 24 hours a day and downtime in the case of hardware failure should be as short as possible. Commercial hosting companies usually provide a professional service for a few dollars per month. This includes an unlimited number of domains, unlimited disk space and many other goodies.


Being able to access your home computers from any location is very useful when you need some files or data from home. Even if you have a dynamic IP address you can access your development websites and show them to your friends or clients.

Saturday, 18 September 2010

What to Look For in a LCD Computer Monitor?

In general, it is difficult to say which computer component is the most important. Probably there is no such component: every part is needed to perform some task, otherwise we wouldn't need it. But there is one difference. With some computer components we have direct contact, and one of those is the monitor. The computer monitor communicates with us in a very special way. It displays all the information about the status of the computer as well as the windows of currently active applications. Therefore, it is essential that this component is carefully selected and that it displays a stable and clear picture.

Currently LCD is the predominant technology for computer monitors. There is no big difference between computer LCD monitors and LCD TV sets; the only major difference is the additional interface electronics that makes an LCD TV set behave like a television and an LCD monitor behave like a computer monitor.


Usually, when buying computers we look at the price tag. This is normal since price ranges can vary significantly. But the price should not be the only parameter upon which we will make a decision. When buying computer monitors we should consider the following:

Main Purpose of Our Computer
We can use the computer as an office tool, a gaming machine, a designer's drawing board, or a combination of these and other purposes. Each purpose puts the emphasis on a different parameter. For example, designers working in desktop publishing need large desktops and realistic color reproduction, while gamers need monitors with fast response times. The first step in choosing a monitor is to define the main purpose of the computer.

Size
The bigger the better. This simple rule is valid for all computer purposes. With a larger working area you can easily work with many applications at the same time. Application windows will not be squeezed into small cluttered rectangles, the taskbar will look like a nice informative bar instead of a strip of tiny buttons, and you will be able to put more icons for frequently used programs on the desktop. Because all LCD monitors are flat there is no problem with space on the table, so the only disadvantage of a large monitor is perhaps the higher price.

Resolution
Again, the bigger the better. The display resolution tells us how many pixels or points the monitor can display. More pixels mean either a larger monitor or smaller image dots. A typical resolution for a multipurpose computer is 1920x1200 pixels. It allows comfortable work with office programs and web browsing, and also a good experience with CAD programs.

Interfaces
Almost all computer LCD monitors have a DVI input, which is the standard for computer graphics cards. If we intend to connect multimedia devices we also need an HDMI input. Many monitors also have integrated USB hubs, which is very handy for connecting a keyboard, mouse and external disks. Some computer monitors even have a TV tuner which turns them into a real TV set.


Choosing a computer monitor is not an easy task even if you know what to look for. For more tips and hints you can visit the http://computerlcdmonitors.org/ website which provides a few basic facts about computer LCD monitors.

Perhaps the most important fact to remember is that you will be staring at this flat panel for hours. Make sure it looks pretty.

Thursday, 16 September 2010

Universal JTAG Cable

Every electronic device around has some microcontroller or other programmable device in it. Even a kitchen appliance or a toy has some kind of electronics which controls its important functions. In some cases this electronic device has predefined functionality which cannot be changed, but in most cases the electronics contains a processor that uses flash memory to store code. And if a device is programmable then you can change the program it runs at any time.


Of course, in many cases this programming is not needed since the electronics does what it is supposed to do. But in some cases it makes sense to allow end users to update the device with the latest firmware. Therefore, it makes sense to have a universal interface that can be used in the factory and at home to program electronic devices with simple additional hardware. This universal interface is JTAG, a standard that defines a serial interface to transfer data between a device and a computer (or between devices). The JTAG interface needs only four basic signals, which makes it easy to use a PC parallel port as a JTAG interface. While this works, many modern JTAG programmers use the USB port to connect to the PC.
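
Because JTAG is a synchronous serial protocol, shifting data in and out boils down to toggling a clock while driving and sampling data lines. The Python sketch below only illustrates that idea: set_pin and get_pin are hypothetical functions that would talk to a parallel port or USB adapter, and the four signal names (TCK, TMS, TDI, TDO) are the standard JTAG lines.

    def set_pin(name, value):
        """Hypothetical: drive one JTAG output line (TCK, TMS or TDI)."""
        raise NotImplementedError("depends on your cable/adapter")

    def get_pin(name):
        """Hypothetical: sample the TDO input line."""
        raise NotImplementedError("depends on your cable/adapter")

    def shift_bits(bits_out):
        """Clock bits out on TDI while capturing the bits coming back on TDO."""
        bits_in = []
        for bit in bits_out:
            set_pin("TDI", bit)             # present the next data bit
            bits_in.append(get_pin("TDO"))  # TDO is valid before the clock edge
            set_pin("TCK", 1)               # rising edge: device samples TDI
            set_pin("TCK", 0)               # falling edge: device updates TDO
        return bits_in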

Another beauty of the JTAG interface is the possibility to connect many devices in a chain. This way you can program all the JTAG-enabled devices on the board with only one interface. In any case you need a JTAG cable to connect to the device. Each device may have its own JTAG pinout, but this is not a problem since JTAG cables can be made (almost) universal--at least for a few device types. Unfortunately, a JTAG cable is not enough; you also need suitable software that will program the target device.


JTAG is not used only for commercial electronics. It is also a standard interface for all embedded systems including FPGAs, microcontrollers and various memories. Each manufacturer has some kind of JTAG cable to program their devices. Unfortunately, these cables use dedicated software and cannot be used for other purposes.

One of the very popular uses of the JTAG interface in consumer electronics is to reprogram some router models with new firmware. There are many cheap JTAG cables that can be used to bring new life to an old router. Changing the original firmware is very popular not just for routers but also for other highly popular electronic devices. The Xbox is one example where a simple JTAG hack can change some built-in functions.


There are many JTAG cable suppliers around. Before you decide on a specific cable you need to check the JTAG pinout and software support. Usually, cheap cables are made only for specific devices and use the parallel port signals to program the device. More advanced JTAG cables use a USB interface or at least a buffer to boost the signals from the parallel port.

Tuesday, 14 September 2010

Digital Television and Computers - Watch TV on Your Computer Monitor

Digital television broadcasting uses satellite and terrestrial platforms to deliver content to viewers. Digital television is also available via cable and IPTV networks. Most people use TV sets to watch their favorite channels, but there is an alternative way of watching television: if you have a computer you can receive and decode all available channels, including premium services if you have a suitable subscription card.

There are at least two ways to upgrade your computer for television reception. You can either use an internal radio/TV card or a small USB stick. Both hardware solutions work well and are not expensive. The USB stick is more suitable for use with laptops, but you can also use it with a desktop computer as a permanent solution. There are many different models that support different technologies. For terrestrial broadcasting in Europe and many other countries your device should support DVB-T; DVB-S is needed for satellite television and DVB-C for cable reception. Many cards and sticks also support analog FM radio and analog TV. Before you purchase such a card or stick you should check which standards are used in your country.

Many internal cards have on-board audio/video decoding support. This hardware decoder does work that is otherwise very time consuming and processor intensive. USB sticks have no hardware decoder; there is only a demodulator that supplies the DVB transport stream. The main computer processor has to decode the sound and picture, but this is also an advantage: it is much easier and cheaper to update decoding software than to replace decoding hardware.

Most TV cards and sticks come with a remote control. It is very similar to a real TV remote control and enables you to turn your computer monitor into an advanced TV set. Every television decoding application lets you display the TV picture over the entire screen, so you can enjoy it as if you were in front of a standard TV set. The only disadvantage of this way of watching TV is that while you watch full-screen TV you cannot use the computer for other work.


Receiving television channels with a computer also has one big advantage: you can easily record any channel. You can even record one channel and watch a different one. This is all possible because the software is able to decode many different services simultaneously. Of course, this also depends on the speed of your processor, but in general recording a particular channel or the entire stream is not a problem.

Digital television is much closer to computers than it was in the old analog days. Digital broadcasting uses codecs which have roots in computers. Watching television on the computer is nothing new. In fact, digital broadcasting is only a network of many dedicated computers and peripheral devices.

Sunday, 12 September 2010

Computer Forensics - How Volatile Data is Analyzed

Computer forensics plays an important role in fighting terrorism and criminal activity. The fact is that bad guys use computers, the internet and other modern communication tools to communicate and to store their plans. We would be naive to think that they can barely open Word or Excel. They are aware of all the risks and they protect themselves with modern encryption algorithms and general protective measures. Fighting criminal activities is very different from discovering occasional violations on company computers.


Many traces can be hidden if the software used for criminal or otherwise unwanted activity is not present on the computer's disk and runs only in the memory of the computer. It is very easy to start a process and then successfully cover all the traces that were left behind. In such a case analyzing disk data makes no sense because nothing suspicious will be discovered. The only solution to this problem is tools that can preserve volatile data such as live memory.

The static analysis of computer data (i.e. the analysis of a hard disk removed from the computer) is usually not enough, because many advanced techniques can be used to erase all traces from file systems, and the only relevant data remains in memory. Theoretically, it would be possible to freeze computer memory with liquid nitrogen, which would significantly increase the chances of recovering the data, but this approach is not practical. Analysis of live volatile data in a running computer is essential for any serious forensic analysis.


There are many open source and professional commercial forensic tools that can make a snapshot of crucial volatile data for later analysis. Such tools can discover open ports, virtual disk drives, VPN connections and other resources not visible to the normal user. In some cases also the whole disk drive or individual partition can be encrypted so it is important to make an image of it before the system is shut down. Once all the data is safely stored it can be analyzed regardless of the state of the computer.

A logical question would be, for example, what can be done to successfully hide some process running in the computer's memory? Theoretically, it would be possible to eliminate traces from memory when the process is not active or while it waits for some input. But even for such approaches there are countermeasures: it is possible to create memory snapshots at periodic intervals, and sooner or later the secret process will show itself.
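
As a very rough illustration of periodic snapshots of volatile data, the Python sketch below records the running processes and open network connections every few minutes. It assumes the third-party psutil package is installed; real forensic tools capture far more (full memory images, loaded drivers, open handles, etc.).

    import json
    import time

    import psutil  # assumed third-party dependency

    def snapshot():
        """Collect a minimal view of volatile state: processes and connections."""
        return {
            "timestamp": time.time(),
            "processes": [p.info for p in psutil.process_iter(["pid", "name"])],
            "connections": [
                {"pid": c.pid, "status": c.status, "laddr": str(c.laddr)}
                for c in psutil.net_connections()
            ],
        }

    while True:
        with open("volatile_snapshots.jsonl", "a") as log:
            log.write(json.dumps(snapshot()) + "\n")
        time.sleep(300)  # take another snapshot in five minutes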

For many computer users the most requested computer forensic service is recovering lost files. Check the http://digitaldatarecovery.org/ website for some practical tips on digital data recovery and practical advice on recovering important files from broken hard disks or flash memories.


Computer forensics is becoming an increasingly important part of the effort to detect and prevent terrorist activities. But the game will never end: more advanced hiding techniques will lead to more advanced discovery techniques, which will lead to even more advanced hiding techniques, and so on.

Friday, 10 September 2010

Popular Microcontrollers

Embedded systems are not just complex projects in electronics laboratories--they are present in everyday devices. Every mobile device, electric toy or kitchen appliance has some electronic board which usually includes a programmable device: a microcontroller. This is a special microprocessor with peripheral devices and I/O ports. Depending on the production volume of the device, the manufacturer can decide whether to develop an ASIC--a dedicated integrated circuit which performs all the functions for this device--or to make a standard board with discrete components. In both cases some microcontroller is used, either as a soft core in the ASIC or as a standard integrated circuit.

Popular microcontrollers are popular mainly because of availability of affordable development tools and low prices of devices. When hobby engineers start using one family they get used to it and it is very likely that they will use it later in a professional project. While PIC and AVR microcontrollers are heavily used in hobby projects, ARM has prevailed in the professional embedded world. However, 8051 microcontrollers are still used in many hobby and professional projects.

There is a plethora of choices, from open-source projects to various IP cores with significant royalties for each device. Despite this choice there are a few microcontroller families that are popular because of their flexibility, powerful development tools or simply for historical reasons.

ARM


This is currently the hottest RISC core, used in almost all mobile phones, portable devices and many other applications. It has a powerful instruction set and low power consumption, offers easy integration, and there are many good development tools for easy development and debugging. The ARM core is also used in many popular microcontroller families from Atmel, Luminary Micro (now Texas Instruments), NXP and many other manufacturers. These microcontrollers are very popular among embedded engineers and are used in various applications, from the automotive industry to hobby projects.

AVR


This is one of the most popular microcontroller families from Atmel. It is also very popular among hobby engineers and is used in many projects, from simple LED controllers to complex communication devices. The RISC architecture offers fast execution and low power consumption. Development tools are available for free, which is a great bonus for electronics enthusiasts. AVR is a direct competitor to Microchip's PIC. Some favor AVR, others like PIC; there is no clear winner. Both families work well, and it is up to the developer/programmer which one he likes or prefers.

PIC


This is a leading microcontroller family from Microchip. PICs are available in very small packages with only a few pins and also as powerful 32-bit microcontrollers with many peripheral modules and I/O pins. They are very popular among hobby engineers--in hobby projects you will find either AVR or PIC.

8051


This is a very old 8-bit microcontroller architecture that has managed to survive for more than 30 years. Many excellent compilers, a lot of code examples and simple development have contributed to the popularity of this family. The core is still used in many modern microcontrollers from Silabs, NXP, Atmel and many other microcontroller manufacturers. It is very likely that the 8051 is the most widely used core in embedded applications. Of course, many new designs will probably use ARM or some other advanced architecture, but because of the popularity of the 8051 family in the past and the availability of development tools (C, assembler and Pascal), it is still used in many applications.

Internet Service - Wireless Vs Wired

It is amazing that roughly 20 years after the first commercial use of the internet we are now totally dependent on it. In many cases this dependence evolves into addiction. There are many popular services that we use on a daily basis; Facebook, Gmail and Twitter are only a few popular names that wouldn't exist without the internet.

The first home access to the internet was via modems and the telephone infrastructure. At that time the speeds were adequate for the websites and services that were available. The next step was the replacement of modems with xDSL technology, which significantly increased the range of available bandwidths. The latest approach is using fiber cables to bring the fastest internet speeds into individual homes. 1 Gbit/s is now a reality for anybody who can connect to an optical network.

Another technology that evolved in parallel with wired access is, of course, wireless access to the internet. The most common technology in use today is Wi-Fi, which allows speeds over 100 Mbit/s. All you need for wireless access is an access point (usually a wireless router) and a WiFi enabled laptop or any computer with a wireless network card.

For the average user there is probably no big difference between these two ways of accessing the internet. However, there are some very important aspects that need to be taken into account when deciding which technology to use.

Wired access provides constant bandwidth between the individual user and the internet service provider. It is mainly the bandwidth of the ISP's backbone that determines the actual speed we will be able to achieve. Wired access enables a permanent connection. This is important when you need a reliable connection which should also be available from the outside. A static IP address is usually used for such purposes, and if the connection is important it is also backed by a UPS.

The main difference with wireless internet access is that it uses a radio channel to transfer data. This channel has limited capacity which is usually shared among many users. Even if there are many channels available, the total bandwidth we can get also depends on the number of users and their transfers. In general, this is an additional bottleneck for our access to the internet. Another significant difference is that the connection is not permanent. When it is established there is no guarantee that it will stay as it is: the situation in the radio frequency spectrum can change, other users may start using the same access point, a slight change of antenna position may significantly decrease the connection speed, and so on.

One typical example of a device that can successfully use a wireless connection is a wireless credit card reader. It enables credit card processing away from the office or counter. Wireless credit card terminals significantly simplify doing business and allow buyers to keep their credit cards in their own hands.

In many cases you can use either wired or wireless access. But there are also cases that demand a reliable wired connection, or wireless access at any cost.

Thursday, 9 September 2010

Life With Internet

The internet was born long before we noticed it. Today we cannot imagine life without it. There are many people who believe that the internet is the world wide web (WWW) and vice versa. Of course, the web is only one of the services that are available over the internet. The fact is that our lives depend on internet services even if we don't have a computer. It is as difficult to imagine life without the internet as it was difficult to imagine life without electricity thirty years ago. The internet is a worldwide network. And only a network. Services are what help us send a message or get something done.


Internet Service Providers
In order to connect to the internet you need a contract with some internet service provider (ISP). Access to the internet can be made available over an analog phone line, ISDN, various xDSL technologies, fiber to the home (FTTH), wireless networks, cable networks, etc. The speeds vary, but it is not uncommon to have 100 Mbit/s or even 1 Gbit/s via FTTH. Regardless of the connection speed, internet access is a must.

Email
Sending electronic messages is one of the most popular internet services. In order to send and receive a message at least two mail servers are typically involved, one for sending and one for receiving the message. Since an email address is a prerequisite for any business, not just an online one, each company has at least one mail server. Fortunately, there are many free email service providers. One of the most popular and famous is Gmail, the free email service from Google. If you don't have a Gmail account you are missing a lot.


World Wide Web
The WWW is probably the most widely used internet service. This is probably the reason why the web means the same as the internet to many people. Having a homepage is a must, not just for companies but also for communities, clubs, organizations and individuals. Writing a blog is very popular these days. Because of this popularity there is huge competition among web hosting providers. The prices are low, setting up a website is pretty simple and just about anybody can have their own website.

Spam
Spam is not an internet service, but it is a direct consequence of cheap or even free internet services. It is so easy to create an email and send it to any address. And sending that message to millions of email addresses worldwide is no different. Email spam is a big problem. There are many companies offering anti-spam products or services. One excellent anti-spam filter is the one used by Gmail: you will rarely receive a spam email there. But there is also spam on the web. There are millions of pages that offer no useful content. When you are searching for something, it is very likely that you will land on such a page.

Of course, there are also other services on the internet. Sending IP packets is simple, and many modern appliances and pieces of consumer electronics have an Ethernet connector. We can expect a huge expansion of communication over the internet.

Tuesday, 7 September 2010

Convert Your Old Computer to a Linux Server

Linux is a very popular platform, not just because it is free but also because it is reliable and supports almost anything you can imagine. A popular setup is a Linux server without any graphical user interface. It can be used for web hosting, as a file server, as a database server, or for anything else you need. Most people comfortable with the Windows operating system are afraid to start thinking in a different way. In fact, installing and using Linux is pretty simple.


Once you decide to go for it, you have already made the first step. The next step is to get some basic information about installing Linux. There are many Linux distributions. One that is very popular is Ubuntu. Simply Google for "ubuntu server" and learn what you need to install Linux. In general, things are pretty simple. You can install Linux on almost any machine. An old computer that was replaced some time ago is a perfect candidate. You only need some space on the hard drive, a CD or DVD drive, a network card and a little patience.

The first step is to make a bootable CD with the latest Ubuntu server image. Download the image file and burn it to a CD. Then boot your old computer from this CD and start installing Linux. It is a good idea to do the installation next to your main computer with internet access. This way you will be able to search for any problem you may encounter. The most important thing you should know is that for every question you may have, there is an answer on some web page waiting for you. You only have to find it.

The installation process is pretty straightforward. If you don't understand what the installer is asking, simply select the default option. Of course, you can also ask Google and then choose the appropriate option. You should understand that the Linux philosophy is very different from the Windows one. But once you become familiar with the Linux shell and basic commands, it will be very easy to work with Linux and to install and configure new software. In most cases the server will be located in some remote place and will not need a monitor; you will access it over the network.


Having a Linux server is a great upgrade to your home network. This server can serve as reliable storage for large peer-to-peer files, as a web server for website development, or simply as a computer to play with. And remember, sooner or later you will encounter a problem: something will not work, or you will not know how to change some setting. All you have to do is search for the answer on the web, which offers a giant encyclopedia on Linux.

Sunday, 5 September 2010

Advantages of Joomla Content Management System

Most bloggers, internet marketers and individuals use the web to publish content. They usually don't care about HTML, CSS, JavaScript or PHP. A web content management system (CMS) is a complete platform for creating a website from scratch. It provides the framework needed to communicate with the database, to store and retrieve data, to dynamically create pages, to deal with authentication and to do the other tasks needed for any website. The main advantage of using a CMS is that you can focus on content. You install the system, select a template and a few plugins, and you are ready to go. One very popular web system is Joomla.

Joomla is a universal free open-source content management system. Universality means that you can adapt it for almost any purpose. Bloggers usually choose Wordpress as their blogging platform. This is a natural choice, but you can also build a blog with Joomla. The basic installation provides the backbone of a working website. The page layout can be set by installing an appropriate template. You can choose among thousands of free templates or pay a small fee for a professional, customizable design. Additional features and functionality can be added with various extensions. Depending on their function, extensions are available as components, modules or plugins. Extensions make Joomla extremely versatile.


The beauty of extensions is that there is one for every imaginable function or feature. Whenever you need some functionality for the website, you can go to the Joomla extensions page and search for the right module. Some extensions are available free of charge; for some you will have to pay. And if for some reason you cannot find a suitable extension, you can modify an existing one or create a new one from scratch according to your needs. For this you will need to know a little HTML and PHP, but this is not a problem since both languages can easily be learned from existing code examples.

Another nice feature of Joomla is customization of the core system. With code overrides you can change the default layout of most page types. If this is not enough, you can modify any Joomla file to achieve the desired functionality. For this kind of customization you need to take into account that an upgrade may overwrite modified files. But if you are a skilled web programmer, this will not be a problem. Since Joomla is an open-source platform, you can modify it any way you like. Most customizations can be done without knowing any web languages, but if you know them the task will be very easy and pleasant.


A simple example of the Joomla CMS in practice is the http://fullspectrumbulbs.org/ website. This is a small site about full spectrum bulbs which uses the basic Joomla system with a lightweight template. One of the benefits of such a simple layout is fast page loading. Of course, there are a few extensions installed, but they don't provide any crucial functionality for visitors.

Joomla is a fantastic content management system. With a few extensions you can make a website for almost any purpose. But if your goal is a blog, you will probably use Wordpress, not because it is better than Joomla but because it is easier to create a blog with it.

Friday, 3 September 2010

Details About Typical Television Resolutions

Television resolution describes the number of picture elements (points or pixels) that make up the TV picture. The higher the number, the sharper the picture. In general there are two groups of television resolutions. Standard definition television (SDTV), as we have known it for the last 50 years, has a few different standards that have not changed since they were defined. The only "upgrade" was the addition of color, which did not modify anything related to picture resolution. High definition television (HDTV) means a picture with more lines and more picture elements. There are even higher resolutions (beyond HDTV), but that technology is still in its infancy. Currently we are mainly talking about SD and HD television.


Standard Definition Television - SDTV

There are a few different standards that were defined in the age of black and white television, even before World War II. These standards define parameters like the number of lines, the number of pictures per second, and other technical details. In general, the number of picture lines together with the frequency bandwidth reserved for the picture defines the television resolution. NTSC (the color television standard used in the USA and Japan for SDTV) has a resolution of 480 lines, each with 720 pixels. In Europe and many other countries where PAL was used, the resolution was 576 lines with 768 pixels in each line. Because this resolution was quite satisfactory and there was no easy way to enhance it, the old analog technology survived until recently, when the transition to digital broadcasting began.

High Definition Television - HDTV

Digital broadcasting allows us to use many advanced services, and one of them is a television picture with higher resolution--high definition television. HDTV means any TV resolution higher than SDTV. There are a few standard resolutions that are supported by professional and consumer equipment. Each resolution has two modes: interlaced and progressive. Interlaced means alternately displaying the odd lines in one field and the even lines in the next, while progressive mode displays all lines in each frame. The basic HD resolution is 1280x720, which means 720 lines with 1280 pixels in each, and the highest HD resolution is 1920x1080. There are also a few intermediate resolutions which are mainly used by smaller display devices. The highest format, 1080p (1920x1080 progressive), is supported by larger LCD and plasma displays, but currently no terrestrial television broadcasts this way. Most HD broadcasts use either the 720p or the 1080i mode.
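
To get a feeling for how much raw picture data these resolutions represent, here is a small back-of-the-envelope calculation in C. The assumptions are mine and chosen only for illustration: 8-bit 4:2:0 sampling (12 bits per pixel), typical European frame rates, and no blanking intervals or audio taken into account.

    /* Rough uncompressed video data rates for SD and HD formats. */
    #include <stdio.h>

    static double raw_mbps(int width, int height, double frames_per_second)
    {
        double bits_per_pixel = 12.0;   /* 8-bit luma + 4:2:0 subsampled chroma */
        return width * height * frames_per_second * bits_per_pixel / 1e6;
    }

    int main(void)
    {
        printf("SDTV   720x576  @ 25 fps        : %7.0f Mbit/s\n", raw_mbps(720, 576, 25));
        printf("HDTV  1280x720  @ 50 fps (720p) : %7.0f Mbit/s\n", raw_mbps(1280, 720, 50));
        printf("HDTV 1920x1080  @ 25 fps (1080i): %7.0f Mbit/s\n", raw_mbps(1920, 1080, 25));
        return 0;
    }

Even the standard definition picture amounts to more than 100 Mbit/s uncompressed, which is why efficient compression is essential for broadcasting.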


Of course, HDTV is not the last step in television resolution. As the technology evolves, higher resolutions will come that will need higher bandwidths, new transmission equipment and new displays (TV sets). This is called development.

Digital terrestrial television in Europe uses DVB-T as the transmission standard. Some countries use the MPEG-2 coding standard, while others use the even more efficient MPEG-4 standard. An additional improvement can be achieved with statistical multiplexing, which dynamically allocates bandwidth to each service according to its current needs.

Wednesday, 1 September 2010

A PC Based Digital Storage Oscilloscope

A digital storage oscilloscope (DSO) is an indispensable tool for analyzing complex signals or debugging electronic devices. Because the signals are sampled and stored in memory, it is very easy to examine them and their relations. A DSO is in fact a special computer with fast A/D converters and software to analyze and display signals. With a DSO, the problem of displaying short-lived signals and glitches is history; capturing the right moment depends only on your settings and trigger conditions. Storage capacity is rarely a problem unless you would like to store longer intervals.


Because of the computer-based design of the DSO, it is pretty simple to convert an ordinary PC into one. A PC based digital storage oscilloscope is in fact software which runs on a PC with a suitable interface for analog signals. This interface is usually a small box with fast A/D converters connected to the parallel or USB port. PC based oscilloscopes have many advantages over classical DSOs. Since the capabilities of the software are limited only by the interface and the computer's resources, it is possible to track many channels in real time, analyze the FFT spectrum or decode buses like SPI, I2C, JTAG or UART.

The physical limitations of a PC oscilloscope are mainly defined by the external hardware that is used to sample analog or digital signals. The main parameters that define PC DSO capabilities are the number of channels, bandwidth, sample rate and sample memory size. The sampled data is stored in a buffer, from where it is transferred to the PC to be analyzed and displayed. Therefore, at least in principle, the buffer only needs to be large enough to hold samples until they are transferred to the PC. Only in principle, because the parallel port is not fast enough to transfer huge amounts of data in real time.
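
A quick calculation shows what "large enough" means in practice. The numbers below are assumptions chosen purely for illustration (a fast USB link; a parallel port would drain the buffer far more slowly):

    /* Rough buffer-size estimate for a PC based DSO front end. */
    #include <stdio.h>

    int main(void)
    {
        double sample_rate  = 100e6;   /* 100 MS/s, one byte per sample (assumed)   */
        double link_rate    = 30e6;    /* ~30 MB/s sustained over USB 2.0 (assumed) */
        double capture_time = 0.02;    /* desired 20 ms capture window              */

        double produced = sample_rate * capture_time;   /* bytes generated          */
        double drained  = link_rate * capture_time;     /* bytes moved to the PC    */
        double buffer   = produced - drained;           /* must fit in the hardware */

        printf("samples produced : %.0f bytes\n", produced);
        printf("transferred      : %.0f bytes\n", drained);
        printf("buffer required  : %.0f bytes (about %.1f MB)\n", buffer, buffer / 1e6);
        return 0;
    }

With these assumed numbers the box needs roughly 1.4 MB of buffer to sustain a 20 ms capture; a slower link or a higher sample rate pushes that requirement up very quickly.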

A big advantage of PC based oscilloscopes is the upgradeability of the software. You simply install a new version and you get bug fixes or new functionality. Some PC oscilloscopes also allow you to write your own plugins for custom decoding. The average price of a PC oscilloscope is significantly lower than the price of a standalone DSO, but it is still higher than the price of the PC it will run on. Nevertheless, a PC DSO is a compact, cheap and universal solution for a hobby or professional electronics laboratory. It allows you to analyze arbitrary signals, decode popular serial protocols and store signals for later processing.


A special version of the PC based oscilloscope is the I2C analyzer. This is a simplified device which samples a few channels and decodes I2C, SPI, UART and other popular protocols. An important function is the I2C host adapter mode, which allows you to send I2C or SPI packets and act as a master device.
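
To illustrate what such protocol decoding involves, here is a minimal sketch in C that recovers I2C traffic from two sampled digital channels. The function and the sample arrays are hypothetical stand-ins for data read out of an analyzer's acquisition buffer; a real decoder also handles clock stretching, glitch filtering and error conditions.

    /* Decode I2C transfers from two oversampled digital channels. */
    #include <stdio.h>

    void decode_i2c(const unsigned char *scl, const unsigned char *sda, int n)
    {
        int started = 0;                 /* inside a transfer?             */
        int bits = 0;                    /* bits collected of current byte */
        unsigned byte = 0;

        for (int i = 1; i < n; i++) {
            int scl_rise = !scl[i - 1] && scl[i];
            int sda_fall =  sda[i - 1] && !sda[i];
            int sda_rise = !sda[i - 1] && sda[i];

            if (scl[i] && sda_fall) {            /* START: SDA falls while SCL is high */
                started = 1; bits = 0; byte = 0;
                printf("START\n");
            } else if (scl[i] && sda_rise) {     /* STOP: SDA rises while SCL is high */
                started = 0;
                printf("STOP\n");
            } else if (started && scl_rise) {    /* data is sampled on SCL rising edges */
                if (bits < 8) {
                    byte = (byte << 1) | (sda[i] ? 1u : 0u);
                    if (++bits == 8)
                        printf("byte 0x%02X ", byte);
                } else {                         /* 9th clock carries ACK (0) or NAK (1) */
                    printf("%s\n", sda[i] ? "NAK" : "ACK");
                    bits = 0; byte = 0;
                }
            }
        }
    }

In practice the scl and sda arrays would simply be two channels of the capture, one value per sample point, and the decoded bytes would be shown next to the waveform.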

I2C analyzers are much cheaper than PC based DSOs, and many such devices can also be used as a basic logic analyzer.

Tuesday, 31 August 2010

The History of Light Bulbs

One of the first electrical effects used to produce light was incandescence. This is the emission of light from a heated body. The first electrical incandescent light was created in 1802 by the British chemist and inventor Sir Humphry Davy. He used a platinum strip through which he passed an electric current. Platinum was chosen because it has a very high melting point. This incandescent light had two major flaws which prevented practical application: the light was not bright enough and it only lasted for a short time. But the experiment is important because it preceded the first practical incandescent bulbs by almost 80 years.

During the 19th century many experimenters tried various materials and designs. It was the British scientist Warren de la Rue who came up with the idea of putting a platinum filament into a vacuum tube. The vacuum is essential because it prevents air molecules from reacting with the filament, which would reduce its life. Unfortunately, this invention was still not practical because of the high cost of platinum. Many patents for incandescent bulbs were granted for various implementations, including ones with a carbon filament. Because many rivals were working on similar projects, some tried to bypass patents, which also led to a few lawsuits.

Many technical problems had to be solved in order to make a bulb suitable for commercial production. A high vacuum is essential for long operation. Until the 1870s there were no pumps which could produce a satisfactorily high vacuum for light bulbs. With the Sprengel pump it became possible to easily achieve the required vacuum. This pump was one of the key factors that contributed to the success of incandescent light bulbs. The material used for the filament is also very important. It must produce bright light, have a long life and be cheap enough for mass production. Many bulbs at that time used a carbon filament, which was far from an ideal material. In 1904 the tungsten filament was patented and the Hungarian company Tungsram started production. It was also found that if the bulb is filled with an inert gas it has higher luminosity and the effect of blackening is reduced.

Today this kind of bulb is produced by the millions. Unfortunately, cheap production is the only advantage of incandescent bulbs. They are very inefficient--only a few percent of the electrical energy is converted into light; the rest is dissipated as heat. There are attempts to increase the efficiency of incandescent lamps, but they will not change the overall picture of inefficient lighting. Therefore, many countries have taken steps to replace them with more efficient compact fluorescent lamps.

Most bulbs create light which is slightly colored. But there are also full spectrum bulbs which can reproduce natural sunlight. Such bulbs are used in environments where accurate color reproduction is important. But full-spectrum light bulbs can also be used at home. For many people natural white light creates a very pleasant living environment.

Compiler Design - Practical Compiler Construction

A good optimizing compiler is a must for any computer platform. The availability and quality of compilers help determine the success of the platform. Compiler design is a science. There are numerous books written about compiler principles, compiler design, compiler construction, optimizing compilers, etc. Usually the compiler is part of an integrated development environment (IDE). You write a program in some high-level language, click compile, and a moment later you get executable code, an assembler listing, a map file and a detailed report on memory usage.

As a programmer and IDE user you expect fast compilation and optimized generated code. Usually you are not interested in the compiler's internals. However, to build your own compiler you need a lot of knowledge, skill and experience. Compiler construction is a craft. Compilation is a symphony of data structures and algorithms. Storing compiler data in proper structures and using smart algorithms will determine the quality of the compiler. Compilers usually have no user interface; they process source files and generate output files: executable code, an object file, assembler source, or any other needed file.

Why are compiler data structures so important?

A compiler needs to store identifiers in symbol tables. Each identifier has attributes like type, size, value, scope, visibility, etc. The compiler must be able to quickly search the symbol table for a particular identifier, store a new identifier with minimal changes to the table, and so on. To satisfy all these requirements it is essential to carefully design the symbol table structure. Usually hash tables are used for quick search and linked lists for simple addition or deletion of elements. Symbol table management is one of the critical elements of any compiler. Another important internal data structure is the intermediate code representation. This is code that is generated from the source language and from which the target code is generated. Optimizations are usually applied to the intermediate code, so the right form of intermediate code, with general syntax and detailed information, is crucial for successful code optimization.
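
To illustrate the idea, here is a minimal sketch of a chained hash table for identifiers. The structure, the hash function and the single attribute field are simplified placeholders rather than code from any particular compiler, and error checking is omitted for brevity.

    /* A tiny symbol table: hash buckets with linked-list chaining. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define TABLE_SIZE 211                /* a prime number of buckets */

    typedef struct symbol {
        char          *name;              /* identifier name           */
        int            type;              /* simplified attribute      */
        struct symbol *next;              /* chain for hash collisions */
    } symbol;

    static symbol *table[TABLE_SIZE];

    /* Simple string hash; real compilers often use stronger functions. */
    static unsigned hash(const char *s)
    {
        unsigned h = 0;
        while (*s)
            h = h * 31 + (unsigned char)*s++;
        return h % TABLE_SIZE;
    }

    /* Search the table for an identifier; returns NULL if it is unknown. */
    symbol *lookup(const char *name)
    {
        for (symbol *s = table[hash(name)]; s != NULL; s = s->next)
            if (strcmp(s->name, name) == 0)
                return s;
        return NULL;
    }

    /* Add a new identifier at the head of its bucket (constant time). */
    symbol *insert(const char *name, int type)
    {
        unsigned h = hash(name);
        symbol *s = malloc(sizeof *s);
        s->name = malloc(strlen(name) + 1);
        strcpy(s->name, name);
        s->type = type;
        s->next = table[h];
        table[h] = s;
        return s;
    }

    int main(void)
    {
        insert("counter", 1);
        insert("limit", 1);
        printf("counter is %s\n", lookup("counter") ? "in the table" : "missing");
        return 0;
    }

The head-of-bucket insertion keeps additions cheap, while the hash function keeps lookups fast even when the table holds thousands of identifiers.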

Converting source code to target instructions is not a big deal. A limited set of registers can sometimes require smart register saving techniques, but in general each source language statement can be translated into a set of target instructions that performs the required effect. If you do not take steps to optimize the generated code, you will end up with very inefficient code full of redundant loads and register transfers. Excellent optimizations are the shining jewel of any compiler. With some average optimizations you can easily reduce the generated code size by 10% or more. Code size reduction in loops also means increased execution speed. There are many algorithms for compiler optimization. Most are based on control flow and data flow analysis. Optimizations like constant folding, integer arithmetic optimizations, dead code elimination, branch elimination, code-block reordering, loop-invariant code motion, loop inversion, induction variable elimination, instruction selection, instruction combining, register allocation, common sub-expression elimination, peephole optimization and many others can make the initially generated code almost perfect.
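
As a small taste of such transformations, here is a sketch of constant folding on a toy expression tree. The node layout is invented for illustration and is far simpler than a real compiler's intermediate representation.

    /* Constant folding on a toy expression tree. */
    #include <stdio.h>
    #include <stdlib.h>

    /* A node is either a constant leaf (op == 0) or a binary operator. */
    typedef struct node {
        char op;                      /* 0 for a constant, otherwise '+' or '*' */
        int value;                    /* used when op == 0                      */
        struct node *left, *right;
    } node;

    static node *constant(int v)
    {
        node *n = calloc(1, sizeof *n);
        n->value = v;
        return n;
    }

    static node *binary(char op, node *l, node *r)
    {
        node *n = calloc(1, sizeof *n);
        n->op = op; n->left = l; n->right = r;
        return n;
    }

    /* If both operands of an operator are constants, replace the whole
     * subtree with the computed constant. */
    static node *fold(node *n)
    {
        if (n == NULL || n->op == 0)
            return n;
        n->left = fold(n->left);
        n->right = fold(n->right);
        if (n->left->op == 0 && n->right->op == 0) {
            int result = (n->op == '+') ? n->left->value + n->right->value
                                        : n->left->value * n->right->value;
            return constant(result);  /* a real compiler would free the old nodes */
        }
        return n;
    }

    int main(void)
    {
        /* (2 + 3) * 4 folds to the single constant 20 at compile time. */
        node *expr = binary('*', binary('+', constant(2), constant(3)), constant(4));
        printf("folded value: %d\n", fold(expr)->value);
        return 0;
    }

The same idea scales up: whenever an optimization pass can prove the operands of an operation at compile time, it replaces the operation with its result and hands later passes a simpler tree to work with.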


If you have ever tried to write a compiler, then you probably already know that this is not a simple task. You can use compiler generators, or write the compiler from scratch. Compiler construction kits, parser generators, lexical analyzer generators (lexers), optimizer generators and similar tools provide an environment where you define your language and let the tools generate the source code for your compiler. However, to make a fast and compact compiler you need to design your own building blocks, from the architecture of the symbol tables and scanner to the code generator and linker. Before you start reinventing the wheel, it is a good idea to read some books about compiler design and examine the source code of an existing compiler. Writing your own compiler can be great fun.

There are many excellent books on compiler design and implementation. However, the best book on compiler design is a compiler itself. Take a look at a Turbo Pascal compiler written in Turbo Pascal. Its source code shows all the beauty of the Pascal programming language and reveals the tricks needed to build a fast and compact compiler for any language, not just Pascal.

Monday, 30 August 2010

Recording Studio Software Selection

When you start looking for recording studio software you are usually focused on nice screenshots, fancy descriptions and the price tag. However, most people never check the most important factors when selecting recording software. You should first ask yourself why you need the software, which features are most important to you, whether you already have a computer or would buy a new one, whether you need compatibility with other recording studios, and whether the software will work with your audio hardware or you also need to purchase a new, dedicated sound card. Selecting the right recording studio software is not a simple task.

Why do you need recording studio software? Because every home or professional recording studio uses computers for audio production. You can use computers with appropriate software not just for recording but also for editing, adding effects, sound synthesis, filtering, mastering, archiving, transfer, etc. It is also possible to build a cheap home recording studio with an ordinary PC and some popular audio recording software. Computers and audio recording software have simply become an essential part of every recording studio.


What features do you need? Well, this depends on your individual requirements and the studio type. Every recording application supports recording, editing and playback. You should decide whether you need more emphasis on audio recording features (level matching, spectrum analysis, audio compression, conversion tools, mastering, etc.) or on MIDI devices, instruments and sampling (sequencers, wavetable synthesis, drum machines, musical notation, etc.). Many recording studio software solutions have good support for both audio operations and musical instruments.

Do you already have a sound card, or can you afford to buy a new one? Most music recording software works with standard sound cards that are supported by the operating system, while some works only with proprietary audio hardware. One such example is Pro Tools, which works only with special Digidesign or M-Audio hardware. The basic rule is to first select your audio hardware according to the needs of your studio. The next step is to find suitable audio recording software. However, in some cases these steps can be reversed: if you know exactly which software you need or would like to have, then you need to find a sound card that is supported and has connections compatible with your recording studio equipment.

If you don't have a computer yet, then you might have a dilemma: Mac or PC? The answer is not simple. Both work well and are found in recording studios. Most recording studio software solutions work on both platforms, but not all. If you already have a computer that you intend to use, then this question is moot. Otherwise it is a good idea to check the availability of your chosen audio recording software on Mac and PC.

Do you need to transfer your music projects to another studio? If the answer is yes, then your software needs to be compatible with the software in the studios you intend to work with. A typical example of standard recording studio software which excels in compatibility is Pro Tools. All versions of Pro Tools support the Pro Tools project file, so you can easily transfer projects between studios that use Pro Tools. Of course, almost every audio application supports standard audio file formats like WAV and MP3 for reading and writing. Compatibility between different recording applications is defined at the level of project files. This means that you can save your unfinished project in one studio and open it in another, where it can be finished.

Price? The price of the most popular audio recording software starts at about $100. This is a very small amount compared to the total price of your studio equipment. Therefore, the price of the software should be one of the last factors when selecting software for your studio.

Selecting the right recording studio software can be a difficult task, but you can simplify it by knowing exactly what you need and how you will use the software in your recording studio.

You can find more information about software used in recording studios at the Recording Studio Software website, which is dedicated to recording studios, computers and software. There you can read more about Macs, PCs, recording studio software selection and recording studio design, and you can also check supported features and compare various recording studio packages.