Monthly Archives: February 2011

Apple’s new laptops grab Thunderbolt with impressive speed

New laptop models introduced this week from Apple include the fastest peripheral standard ever shipped in mass-market computers, providing a connection to both monitors and storage devices through a single port.

Thunderbolt, a technology Intel developed with close Apple involvement, has a data-transfer rate of 10 gigabits per second to and from a computer. Moreover, the version Apple built into these laptops has two Thunderbolt channels in a single port, for a combined raw rate of 20 Gbps in each direction.

This rate is 40 times faster than USB 2.0, four times zippier than the new USB 3.0, and 20 times speedier than gigabit Ethernet, the fastest widely available local networking standard.
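For readers who want to check those multiples, here is a quick back-of-the-envelope calculation (a sketch, assuming the commonly cited raw signaling rates: USB 2.0 at 480 Mbps, USB 3.0 at 5 Gbps, and gigabit Ethernet at 1 Gbps, measured against Thunderbolt's combined 20 Gbps):

```python
# Commonly cited raw rates, in gigabits per second.
rates_gbps = {
    "USB 2.0": 0.48,
    "USB 3.0": 5.0,
    "gigabit Ethernet": 1.0,
}
thunderbolt_combined = 20.0  # two 10 Gbps channels in a single port

for name, rate in rates_gbps.items():
    print(f"Thunderbolt is {thunderbolt_combined / rate:.0f}x faster than {name}")
```

The USB 2.0 ratio works out to roughly 42, which the article rounds to 40; the other two ratios are exact.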

Thunderbolt combines the graphics information to drive a monitor or HDTV set and the features necessary to move data at high rates between external hard drives and other peripherals. The standard also allows eight-channel audio found in high-end home-entertainment systems. (Technically, the graphic standard is known as DisplayPort and the data standard as PCI Express.)

Apple has no lock on the technology, which Intel intends to push heavily. Intel has shown tepid interest in USB 3.0 so far, clearly because of its development of Thunderbolt. USB 3.0 doesn’t include support for video displays.

Thunderbolt can handle two displays per port, but in the laptop version, one of those is the integral screen, which cannot be disabled in favor of an external monitor. A future Mac mini or Mac Pro, desktop machines without built-in monitors, would take full advantage of this. The Mac Pro likely will include multiple Thunderbolt ports as well.

A total of six devices may be chained one to the next from the single port found on the new MacBook Pro laptops. Hubs and splitters are possible, too, although Apple has nothing to offer at present.

Forrester Research Vice President Frank Gillett said Thunderbolt has the potential to take the lead, turning USB into a necessary second standard for less-expensive devices. The combination of fast data transfer and graphics makes it appealingly simple.

“This is what makes multiple displays easy to deal with, because you can daisy chain them,” Gillett said. While USB will persist, “what this knocks off is FireWire 800 and eSATA,” separate, far slower transfer standards.

LaCie and Promise Technology already have announced hard-drive products using Thunderbolt. (The new laptops retain a FireWire 800 port.)

Thunderbolt is backward compatible with DisplayPort, letting users with existing monitors continue to use them. The standard also functions with DisplayPort adapters for analog (VGA) and digital (DVI) monitors, as well as HDTV sets.

The new professional laptops Apple introduced, in 13-inch, 15-inch, and 17-inch display sizes as before, also include an updated video camera, now called FaceTime HD. The camera allows video chat at up to 720p using Apple’s FaceTime system to another new MacBook Pro user.


FaceTime first appeared on the iPhone 4 and later on the fourth-generation iPod touch. A beta version of FaceTime for Mac OS X was released in late 2010; the official release of the software came this week as well. FaceTime 1.0 is bundled with the new laptops, and is 99 cents through the Mac App Store.

The laptops also have been goosed with much-faster processors and improved graphics systems.

Apple provided a more extensive description of its next update of Mac OS X, dubbed Lion. Lion incorporates many optional features brought over from the iPad, such as a screen of application icons that can be organized into folders, more extensive multifinger gesture support (through a trackpad), and automatic resumption of programs from the precise point at which you left off after they are quit or the system has restarted.

A full-screen program mode will emulate the feel of an iPad, too, turning the screen over to a single app at a time.

Users who dislike these features may disable or ignore them, but all Mac owners will appreciate the addition of automatic document saving and retention of older versions. (Software developers will need to update programs to take advantage of some features.)

The new release also will let Mac users exchange files using a wireless connection without both parties being hooked up to the same Wi-Fi network or any Wi-Fi network. The feature, AirDrop, relies on newer hardware that allows a computer to have a Wi-Fi connection for Internet and local access while simultaneously talking peer-to-peer to other nearby devices.

Apple has erased the difference between its regular and server versions of Mac OS X. Lion includes the features of both at the same not-yet-disclosed price. Users can add server features later — a new installation is not required — although Apple declined to provide details at this point.

Mac OS X Server once cost either $499 or $999, depending on the number of simultaneous users desired, and then was reduced to $499 for unlimited users with Mac OS X’s current release. Now, it’s free.

The new MacBook Pro laptops are available for order immediately. Lion is expected this summer, although Apple has provided no details on price, system requirements or an exact ship date.

Glenn Fleishman writes the Practical Mac column for Personal Technology and about technology in general for The Seattle Times and other publications.


Latest Tech Updates: Tips and Ways to Speed Up A Computer System


As a computer matures, its system also slows down. You may have noticed for quite some time that your computer has difficulty loading folders, files and websites. When it freezes, it is trying to tell you something: a problem exists in the system and needs urgent attention from the user. If you are a Windows user, you are the most likely to experience these symptoms, not because Windows has an ineffective system but because most reported cases come from Windows users, which only shows how well-liked Windows is by millions of people. Fortunately, there are also ways to speed up a computer system.

Let us first discuss the numerous reasons why a computer system begins to slow down as it ages. First, the system might be tainted by harmful viruses, adware and malware, which can do serious damage. Sometimes a simple build-up of dust in the hardware can slow the computer down, and even an accumulation of system errors can cause it to freeze. On top of that, we are fond of installing too many software applications; some are forgotten over time and sit in disk space without any use. Once disk space fills up, the system has a hard time opening applications. Resolving all of these troubles can speed up a computer system.

Being an ordinary user is no hindrance to troubleshooting a slow PC. The tips below are kept simple so that everyone can benefit. If you encounter an unfamiliar problem, you can always stop and ask a computer technician for assistance.

Tips to speed up computer system:

Make sure your hardware is sufficient

In order to have a reliably speedy computer system, your hardware (processor, memory and hard drive) must be up to the task. If not, efforts to speed up the system may not be as effective as they should be.

Clean up your desktop

Your desktop must be free from unwanted and unnecessary files. These files eat up available disk space, making your computer slow.

Scan your windows for errors

To speed up a computer system, use a system file checking tool to find and repair any system errors on your computer.

Scan for viruses, adware, and malware

Installing anti-virus software is the surest way out of this problem. A good-quality anti-virus program can detect and eliminate the spyware and malware built to damage your system.

Uninstall unused programs

Programs that are no longer important to the user should be removed from the system. Again, disk space is limited, and if it is loaded with unnecessary files, the computer will slow down. Uninstall everything that is not vital to speed up the computer system.

These simple tips to speed up a computer system can go a long way, and they are worth learning for two reasons. First, it is far cheaper than buying a new PC. Second, it is something you can be proud of: you sped up your computer system yourself.


Eight hot rods compete for Ridler Award in Detroit


The 59th Annual Autorama has taken over Cobo Center in downtown Detroit with hundreds of hot rods, customs, classic cars and motorcycles in what has been billed as America’s greatest hot rod show. Judges have selected eight vehicles, dubbed the Great 8, as finalists for the coveted Ridler Award, which will be presented Sunday evening at the show’s conclusion.

The Great 8 are:

— Jim Marciniak from Andover, Minn., with a 1963 Buick.

— Mike Lethert from Roseville, Minn., with a 1939 Ford convertible.

— Tim Gunsalus from West Alexandria, Ohio, with a 1947 Chevrolet pickup.

— Bruce and Tony Milyard from Grand Junction, Colo., with a 1962 Corvette.

— Bruce Ricks from Sapulpa, Okla., with a 1956 Ford.

— Kenny Frederick from Geismar, La., with a 1957 Chevrolet.

— Kenneth Tallent from McKinney, Texas, with a 1940 Ford.

— Derrick Samson from Marshall, Mo., with a 1951 Chevrolet.

To compete for the Ridler Award, a car must be making its first public appearance at Autorama.

Another highlight of this year’s show is the Hollywood Legends display, including the K.I.T.T. car from TV’s “Knight Rider,” “Starsky & Hutch’s” Torino, the General Lee from “The Dukes of Hazzard,” the “Ghostbusters” movie car and the Monkeemobile.



‘REV It UP’ Nationwide Auto Dealer Technician Contest Helps Instill Consumer Confidence in Auto Repairs; Attracts 6,000 Entrants


LAS VEGAS–February 25, 2011: A thorough inspection of every vehicle that enters an auto dealer service department is vital for both consumer confidence and safety. Mobile Productivity, Inc. (MPi), a Las Vegas, NV-based firm that provides profitability tools for auto dealer service departments, has found that vehicle inspections have not always been done consistently at auto dealerships, and last year it started a contest to test technician skills nationwide.


Uniblue Speedupmypc 2011 Serial Key Number Free Download


Uniblue SpeedUpMyPC 2011 full version serial key number free download. Uniblue has released SpeedUpMyPC 2011, which scans your computer for junk files and applies system tweaks and speed tools, optimized for the latest versions of Windows.

Main Tools of Speedupmypc 2011:

  • System scanner
  • RAM optimizer
  • Memory cleaner
  • Start-up manager
  • CPU booster and more…

Speedupmypc 2011 Serial Key Number:

Click here, fill in the promo form, and the SpeedUpMyPC 2011 serial key will be sent to your email address. Note that the promo page says SpeedUpMyPC 2010, but the key will also work for the SpeedUpMyPC 2011 version.


Corel(R) VideoStudio(R) Pro X4 — New Video Editing Software Brings Power and Simplicity to Movie Making

Press Release Source: Corel Corporation, Tuesday February 22, 2011, 9:00 pm EST

MELBOURNE, AUSTRALIA–(Marketwire – 02/22/11) – Corel today introduces Corel® VideoStudio® Pro X4, the new version of its powerful video editing software that lets anyone create and share professional-quality videos. With outstanding speed and easy ways to bring great-looking movies to the screen, VideoStudio Pro X4 combines power and simplicity to break down the barriers to video editing.

Simplified and Powerful Video Editing for Everyone

Whether you’re a new user or have a long-time passion for video editing, VideoStudio Pro X4 is an ideal choice with its uncomplicated approach to movie making, impressive effects and outstanding speed-to-results.

This latest version includes all-new, creative features to enable anyone to add Hollywood-style effects to their movies. VideoStudio Pro X4 offers new Time-lapse tools that let you easily deliver the high-quality, photographic look of professional productions as your movie speeds your audience through time. The new Stop Motion feature automates this traditional, time-consuming animation process, taking away the complexity of bringing objects like toys or figures to life. In addition, there are new capabilities that make it easy to create a 3D look from your 2D video, emulating the immersive feel of 3D movies on the big screen.

Making the editing process faster than ever, VideoStudio Pro X4 delivers unprecedented performance with new optimization for 2nd generation Intel® Core (Sandy Bridge) and AMD Fusion processors. For users who may not have the latest hardware, VideoStudio Pro offers the unique Smart Proxy feature as well as support for CUDA, GPU acceleration and multi-core processing to provide quick and responsive editing.

“With HD everywhere from the living room to your mobile device, we’re surrounded by stunning video. With VideoStudio Pro X4, we’re giving users the tools to confidently jump into editing their own professional-looking movies — fast,” said Jan Piros, who leads product management for Corel VideoStudio Pro. “Everything we’ve done in X4 is designed to accelerate the video editing process and let you maintain your creative flow as you realize your vision on screen. With a powerful collection of new features, VideoStudio Pro X4 dramatically expands the possibilities of what anyone, even kids, can do with consumer video editing software.”

“Today, more people are capturing and editing video than ever before,” said Kathleen Maher, Senior Analyst at Jon Peddie Research. “The most successful software packages are not only about lots of features, they’re about helping people make movies they can be proud of and distribute and show anywhere. Corel’s new VideoStudio Pro X4 is an example of this new breed of video product.”

New and Enhanced Features in Corel VideoStudio Pro X4

  • New! Stop Motion animation: Have fun making movies that bring inanimate objects to life. Capture images from webcams, camcorders and DSLR cameras and use the automated tools and settings to simplify the stop-motion animation process.

  • New! Speed/Time-lapse: Easily create time-lapse effects from a series of photos or video clips and give your movies the professional look of speeding up time. With the ability to handle full-resolution files and very large-size image sets, this is a perfect tool for HD-DSLR photographers.

  • New! Processor optimization: Offering exceptional power and speed, VideoStudio Pro X4 is optimized for the new 2nd generation Intel® Core and AMD Fusion processors.

  • Enhanced! Integrated HD authoring and burning: Ideal for upgrade customers, it’s easier than ever to author HD movies to DVD and Blu-ray™ Disc with tightly integrated disc creation tools.

  • New! 3D export: Convert 2D video clips into 3D files with presets for 3D output on DVD, Blu-ray™, and AVCHD discs, as well as WMV 3D formats for mobile. Box versions also include a pair of 3D glasses.

  • New! Customizable Workspace: Set up your workspace the way you want — including across dual monitors.

  • Enhanced! Web sharing: With presets for YouTube™, Vimeo®, Facebook® and Flickr® in both HD and SD formats, it’s easy to upload directly to your site of choice.

  • New! Import/Export Movie Templates: Make your own templates and effects that you can upload and share with other VideoStudio Pro X4 users.

  • Enhanced! Corel Guide: Get the tools you need to make great movies with free in-product training videos delivered through the Corel Guide. Click on the Corel Guide inside VideoStudio Pro X4 to access an array of useful information, Help, product updates and add-ons, downloadable media packs, and more.

  • Enhanced! Smart Package: Automatically gather your entire project — video, photo, and audio files — into one folder you can take with you while choosing a custom compression method and secure with password encryption, powered by newly integrated WinZip® technology.

Delivering Video for Business

Corel VideoStudio Pro X4 delivers a complete video editing package for less than $100, giving any business the power to add online video to their marketing mix. With simple drag-and-drop clips, effects and graphic placement, bloggers and business users can quickly create and share custom branded templates. The enhanced Smart Package, now using integrated WinZip encryption technology, gathers all video, photo, and audio files used in a project into one password-protected folder, making it easy to share your work in progress with colleagues or take it to another PC. With presets for uploading directly to YouTube, Vimeo, Facebook, and Flickr, it’s never been simpler to reach your audience.

“If you have readers, customers or partners to connect with, DIY online video is one of the most effective and affordable ways to communicate your message,” said Piros.

Supporting Learning in the Classroom

Corel VideoStudio Pro X4 makes it simple for teachers to incorporate video into the classroom or digital storytelling projects. X4 offers a 3-step interface; outstanding stability and performance; custom template creation and sharing; and easy-to-use creative tools to help kids create something amazing.

“Video is part of our kids’ language. The introduction of video editing into the classroom gives teachers a meaningful opportunity to bring literacy, conceptualization and creativity together in one engaging exercise,” said Piros. “Corel’s flexible and inexpensive academic licensing, in-product training videos and simplicity make VideoStudio Pro X4, hands down, the ideal choice for schools.”


Corel VideoStudio Pro X4 is available now in English, German, French, Dutch, Italian, Spanish, Russian and Polish. Australian pricing is $149 (AUD) for full and $109 (AUD) for upgrade customers. Commercial and education volume licenses are also available.

To download a free fully-functional trial version or for more information about Corel VideoStudio Pro X4, please visit

Media & Blogger Resources

For additional Corel VideoStudio Pro X4 resources including reviewer’s materials, images and videos, please visit

About Corel

Corel is one of the world’s top software companies with more than 100 million active users in over 75 countries. We develop software that helps people express their ideas and share their stories in more exciting, creative and persuasive ways. Through the years, we’ve built a reputation for delivering innovative, trusted products that are easy to learn and use, helping people achieve new levels of productivity. The industry has responded with hundreds of awards for software innovation, design and value.

Our award-winning product portfolio includes some of the world’s most widely recognized and popular software brands, including CorelDRAW®, Corel® Painter™, Corel DESIGNER®, Corel® PaintShop Photo®, Corel® VideoStudio®, Corel® WinDVD®, Corel® WordPerfect® Office, WinZip® and Corel® Digital Studio™.

© 2011 Corel Corporation. All rights reserved. Corel, the Corel and Balloon logo, Corel DESIGNER, CorelDRAW, Digital Studio, PaintShop Photo, Painter, VideoStudio, WordPerfect, WinDVD and WinZip are trademarks or registered trademarks of Corel Corporation and/or its subsidiaries. All other product names and any registered and unregistered trademarks mentioned are used for identification purposes only and remain the exclusive property of their respective owners.

Image Available:


Tips on How to Speed Up a Computer Rapidly


One of the most widely used operating systems available on the market today is Windows; despite being considered bulky, a lot of people all over the world are using it. There are times when Windows runs slowly, but this should not cause you any unnecessary worry, since there are many things you can do to speed up your computer rapidly without having to pay someone to do it for you.

If you need to speed up your computer, you have to eliminate adware, viruses and spyware. This malware can bring harm to your system and eats up a large quantity of your system’s resources; it is a primary reason computers run slowly, since malware takes up a good deal of memory. To counter this problem, make certain that your anti-virus is updated and that regular computer scans are performed.

Check your system’s disk space if your computer is running slowly. More often than not, slow systems are caused by too many files stored on them. For Windows to run quickly, it needs a good deal of free disk space, which makes your computer more efficient. There is a tool in the Windows operating system that lets you clean up disk space; it can help you delete unnecessary files such as temporary internet files and old files.

Defragmenting your disk drive is yet another way to speed up your computer. Your hard drive is the one you use to store your data and files, and it is important to defragment it so that all the scattered file fragments found on the drive are consolidated. There is also a tool in every Windows system, known as Disk Defragmenter, that will help you defragment your PC.

Finally, you’ll want to clean up your Windows registry, which is another good way to improve your computer’s performance. If you are not yet aware, registry files do get corrupted, and you may want to invest in a good registry cleaner so that you can remove unnecessary entries, which helps the operating system run faster.

For more information on how to speed up a computer quickly, you can visit dependable websites that provide do-it-yourself guidance on speeding up your PC.


Hobby OS-deving 3: Designing a Kernel

Now that you have an idea of where your OS project is heading as a whole, it’s time to go into specifics. The first component of your OS which you’ll have to design, if you’re building it from the ground up, is its kernel, so this article aims to be a quick guide to kernel design, describing the major areas you’ll have to think about and pointing you to places where you can find more information on the subject.

What is a kernel and what does it do?

I think it’s best to put a definition here. The kernel is the central part of an operating system, whose role is to distribute hardware resources across other software in a controlled way. There are several reasons why such centralized hardware resource management is interesting, including reliability (the less power an application has, the less damage it can cause if it runs amok), security (for the very same reasons, but this time the app goes berserk intentionally), and several low-level system services which require a system-wide hardware management policy (pre-emptive multitasking, power management, memory allocation…).

Beyond these generic considerations, the central goal of most modern kernels is in practice to manage the process and thread abstractions. A process is an isolated software entity which can hold access to a limited amount of hardware resources, generally in an exclusive fashion, in order to avoid concurrency disasters. A thread is a task which may be run concurrently with other tasks. Both concepts are independent from each other, although it is common for each process to have at least one dedicated thread in modern multitasking OSs.

Hardware resources

So far, I’ve not given much depth to the “hardware resource” concept. When you read this expression, the first thing which you’re thinking about is probably some pieces of real hardware which are actually fully independent from each other: mice, keyboards, storage devices, etc.

However, as you know, these peripherals are not directly connected to the CPU. They are all accessed via the same bus, through one single CPU port. So if you want to make sure that each process only has access to some peripherals, the kernel must be the one in control of the bus. Or you may decide instead that the bus itself is the hardware resource which processes must request access to. Depending on how fine-grained your hardware resource model is, your position on the process isolation vs. kernel complexity scale will vary.

To make things even more complex, modern OSs also manage some very useful hardware resources which do not actually exist from a pure hardware point of view. Consider, as an example, memory allocation. From a hardware point of view, there is only one RAM. You may have several RAM modules in your computer, but your CPU still sees them as one single, contiguous chunk of RAM. Yet you regularly want to allocate some part of it to one process, and another part of it to another process.

For this to work, the kernel has to take its finest cleaver and virtually slice the contiguous RAM into smaller chunks which can safely be allocated separately to various processes, based on each one’s needs. There also has to be some mechanism for preventing different processes from peeking into each other’s memory, which can be implemented in various ways but most frequently implies use of special hardware bundled in the CPU, the Memory Management Unit (MMU). This hardware allows the kernel to only give each process access to its limited region of memory, and to quickly switch between memory access permissions of various processes while the kernel is switching from one process to another.
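The slicing described above can be sketched as a toy allocator that hands out disjoint ranges of a single contiguous RAM, with a miniature stand-in for the MMU's access check (a minimal illustration only; all names are made up, and a real allocator must also free and reclaim memory):

```python
class ToyRAMAllocator:
    """Slice one contiguous RAM of `size` bytes into per-process chunks."""

    def __init__(self, size):
        self.size = size
        self.next_free = 0   # simple bump pointer; no freeing in this sketch
        self.owner = {}      # process id -> (start, length)

    def allocate(self, pid, length):
        if self.next_free + length > self.size:
            raise MemoryError("out of RAM")
        start = self.next_free
        self.next_free += length
        self.owner[pid] = (start, length)
        return start

    def check_access(self, pid, address):
        # The MMU's job, in miniature: is this address inside pid's chunk?
        start, length = self.owner[pid]
        return start <= address < start + length

ram = ToyRAMAllocator(1024)
a = ram.allocate("proc_a", 256)   # proc_a gets bytes 0..255
b = ram.allocate("proc_b", 512)   # proc_b gets bytes 256..767
print(ram.check_access("proc_a", 100))   # True
print(ram.check_access("proc_a", 300))   # False: that's proc_b's memory
```

In a real kernel this check is performed by the MMU on every memory reference, so the common case costs nothing in software.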

Another typical example of an abstract hardware resource is CPU time. I assume you have all noticed that desktop operating systems did not wait for multicore chips to appear before letting us run several applications at once. They all made sure that processes would share CPU time in some way, with the CPU frequently switching from one process to another, so that as a whole it looks like simultaneous execution under normal usage conditions.

Of course, this doesn’t work by calling the CPU and telling it “Hi, fella, can you please run process A with priority 15 and process B with priority 8?”. CPUs are fairly stupid, they just fetch an instruction of a binary, execute it, and then fetch the next one, unless some interrupt distracts them from their task. So in modern interactive operating systems, it’s the kernel which will have to make sure that an interrupt occurs regularly, and that each time this interrupt occurs, a switch to another process occurs. This whole process is called pre-emptive multitasking.
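Pre-emptive time slicing can be simulated in miniature: each "process" below is a generator that yields once per "instruction", and the scheduler plays the role of the timer interrupt by forcibly switching after a fixed slice (a pure simulation; the process names and slice size are invented for illustration):

```python
from collections import deque

def process(name, steps):
    """A fake process: each yield represents one executed 'instruction'."""
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(procs, time_slice):
    """Run each process for `time_slice` instructions, then pre-empt it."""
    ready = deque(procs)
    trace = []
    while ready:
        current = ready.popleft()
        for _ in range(time_slice):
            try:
                trace.append(next(current))
            except StopIteration:
                break                 # process finished: drop it
        else:
            ready.append(current)     # slice expired: back of the queue
    return trace

trace = round_robin([process("A", 3), process("B", 3)], time_slice=2)
print(trace)   # ['A:0', 'A:1', 'B:0', 'B:1', 'A:2', 'B:2']
```

The interleaved trace is the whole point: neither process ran to completion before the other started, even though nothing in the processes themselves cooperates.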

Finally, it is common not to let processes access storage devices directly, but rather to give them access to some places in the file system. Of course, like allocated RAM, the file system is a purely virtual construct, which has no physical basis in HDDs/SSDs, and must be fully managed by the OS at some point.

In short, you’ll have to define which hardware resources your kernel manages and gives processes access to. It is generally not a matter of just giving processes access to hardware x or not, there is often a certain amount of management work to be done in the kernel, compromises to be considered, and sometimes hardware resources must just be created by the kernel out of nowhere, as an example in the case of memory allocation, pre-emptive multitasking, and filesystem operation.

Inter-process communication and thread synchronization

Generally speaking, the more isolated processes are from each other, the better. As said earlier, malware can’t do much in a tightly sandboxed environment, and reliability is greatly improved too. On the other hand, there are several occasions where it is convenient for processes to exchange information with each other.

A typical use case for this is a client-server architecture: somewhere in the depths of the system, there’s a “server” process sleeping, waiting for orders. “Client” processes can wake it up and give it some work to do in a controlled way. At some point, the “server” process is done and returns the result to the “client” process. This way of doing things is especially common in the UNIX world. Another use case for inter-process communication is apps which are themselves made of several interacting processes.

There are several ways through which processes may communicate with each other. Here are a few:

  • Signals: the dumbest form of inter-process communication, akin to an interrupt. Process A “rings a bell” in process B. Said bell, called a signal, has a number associated with it, but nothing more. Process B may be waiting for this signal to arrive at the time, or may have defined a function attached to it which is called in a new thread by the kernel when the process receives it.

  • Pipes and other data streams: Processes also frequently want to exchange data of various types. Most modern OSs provide a facility for doing this, although several only allow processes to exchange data on a byte-per-byte basis, for legacy reasons.

  • Remote/Distant procedure calls: Once we are able to both send data from one process to another and to send signals to other processes so that one of their methods gets called, it’s highly tempting to combine both and allow one process to call methods from another process (in a controlled way, obviously). This approach allows one to use processes like shared libraries, with the added advantage that contrary to shared libraries, processes may hold access to resources which the caller doesn’t have access to, giving the caller access to these resources in a controlled way.

  • Shared memory: Although in most cases processes are better isolated from each other, it may sometimes be practical for two processes to share a chunk of RAM and just do whatever they want in it without having the kernel going in the way. This approach is commonly used under the hood by kernels to speed up data transfers and to avoid loading shared libraries several times when several processes want to use them, but some kernels also make this functionality publicly available to processes which may have other uses for it.
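As a small illustration of the pipe mechanism listed above, Python exposes POSIX-style pipes via os.pipe. Both ends live in one program here for simplicity, where a real client and server would each hold one end; note that the pipe carries raw bytes with no message boundaries, exactly the byte-per-byte limitation mentioned:

```python
import os

# A pipe is a kernel-managed byte stream with a read end and a write end.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello from process A")
os.close(write_fd)                  # signal end-of-stream to the reader

data = os.read(read_fd, 1024)       # the "other process" reads raw bytes
os.close(read_fd)

print(data.decode())   # hello from process A
```

Anything richer than a byte stream, such as datagrams or typed messages, has to be layered on top by the communicating processes themselves.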

Another issue, related to inter-process communication, is synchronization, that is, situations where threads must act in a coordinated manner.

To get acquainted with this problem, notice that in a multitasking environment, there are several occasions where you want to make sure that only a limited number of threads (generally one) may access a given resource at a time. Imagine, as an example, the following scenario: two word processors are opened simultaneously, with different files inside of them. The user then brutally decides to print everything, and quickly clicks the “print” button of both windows.

Without a mechanism in place to avoid this, here’s what would happen: both word processors start to feed data to the printer, which gets confused and prints garbled output, basically a mixture of both documents. Not a pretty sight. To avoid this, we must put somewhere in the printer driver a mechanism which ensures that only one thread may be printing a document at the same time. Or, if we have two printers available and if which one is used does not matter, we can have a mechanism which ensures that only two threads may be printing a document at the same time.

The usual mechanism for this is called a semaphore, and it typically works as follows: internally, the semaphore holds a counter which represents how many more times the resource may still be accessed. Each time a thread tries to access a resource protected by a semaphore, this counter is checked. If its value is nonzero, it is decreased by one and the thread is permitted to access the resource. If its value is zero, the thread may not access the resource. Notice that to be perfectly bullet-proof, the mechanism in use must ensure that the value of the semaphore is checked and changed in a single processor instruction that can’t be run on several CPU cores at once, which requires a bit of help from the hardware: it’s not as simple as checking and changing an integer value. But how exactly this is done is not our business at this design stage.
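
The two-printer scenario above can be sketched with a counting semaphore; here is an illustrative version using Python’s `threading` module, which handles the atomic check-and-decrement internally (the job names and counters are made up for the example):

```python
# Sketch: a semaphore initialized to 2 models two available printers,
# so at most two "print jobs" may run concurrently.
import threading

printers = threading.Semaphore(2)  # internal counter starts at 2
active = 0                         # jobs currently "printing"
max_seen = 0                       # highest concurrency observed
lock = threading.Lock()            # protects the two counters above

def print_job(doc):
    global active, max_seen
    with printers:                 # acquire: decrement if nonzero, else wait
        with lock:
            active += 1
            max_seen = max(max_seen, active)
        # ... feed `doc` to a printer here ...
        with lock:
            active -= 1
    # leaving the `with` block releases: the counter is incremented again

threads = [threading.Thread(target=print_job, args=(f"doc{i}",))
           for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(max_seen)                    # never exceeds 2
```

Note one difference from the description above: when the counter is zero, this semaphore makes the thread wait until the resource frees up, rather than turning it away, which is what you usually want for a printer queue.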

Apart from semaphores, another less frequently used but still well-known synchronization mechanism is the barrier. It allows N threads to wait until each one has finished its respective work before any of them moves on. This is particularly useful when a task is parallelized into several chunks that may not take the same time to complete (think, as an example, of rendering a 3D picture by slicing it into a number of parts and having each part computed by a separate thread).
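
The 3D-rendering scenario maps directly onto a barrier; here is a minimal sketch, again using Python’s `threading` module (the “slices” are stand-ins for real rendering work):

```python
# Sketch: N threads each render one slice of an image; the barrier
# guarantees no thread proceeds until every slice is complete.
import threading

N = 4
image = [None] * N                 # one entry per slice
done = threading.Barrier(N)        # releases once all N threads arrive

def render_slice(i):
    image[i] = f"slice-{i} rendered"   # work of varying duration
    done.wait()                        # block until all N threads are here
    # past this point, every slice is guaranteed to be complete

threads = [threading.Thread(target=render_slice, args=(i,))
           for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(all(s is not None for s in image))   # -> True
```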

So in short, having defined your process model, you’ll have to define how processes will communicate with each other, and how threads’ actions will be synchronized.

Major concerns and compromises

You may have noticed that I’ve done my best to stick to generic concepts and put some care in showing that there are several ways to do a given thing. That’s not just for fun. There are several compromises to play with when designing a kernel, and depending on what your OS project’s goals are, you’ll probably have to consider them in different ways.

Here’s an overview of some major concerns which you should have in mind when designing a kernel (though the importance of each varies depending on what your OS project is):

  • Performance: Both in terms of hardware resource use and in terms of performance as perceived by the user. These are not quite the same thing. As an example, in a desktop operating system, prioritizing software which the user is directly interacting with over background services improves perceived performance without requiring much optimization work. In a real-time operating system, hardware resource usage does not matter as much as meeting deadlines. And so on…

  • Maintainability: Kernels are pretty hard to code, so you generally want them to last a fairly long time. For this to happen, it must be easy to grab their code and tinker with it as soon as a flaw is found or a feature is to be added. The codebase must thus be kept as short as possible, well-commented, well-documented, well-organized, and leave room for tweaking things without breaking the API.

  • Portability: In our world of quickly-evolving hardware, operating system kernels should be easily portable to new architectures. This is especially true in the realm of embedded devices, where the hardware is far more diverse and fast-changing than it is on the desktop.

  • Scalability: The other side of the portability coin is that your kernel should adapt to future hardware evolutions within a given architecture. This notably implies optimizing for multi-core chips: you should strive to keep the parts of the kernel which can only be accessed by a limited number of threads at a time down to a minimal amount of code, and aggressively optimize your kernel for multi-threaded use.

  • Reliability: Processes should not crash. But when they do crash, the impact should be minimized and recoverability maximized. This is where maximizing process isolation, reducing the amount of code in loosely-isolated processes, and investigating means of backing up process data and restarting even the most critical services without rebooting the OS really shine.

  • Security: On platforms which allow untrusted third-party software to run, there should be some protection against malware. You must understand right away that things like open source, antivirus software, firewalls, and having software validated by a bunch of testers are simply neither sufficient nor very efficient. These should only be tools for the paranoid and fallback methods for when system security has failed, and system security should fail as infrequently as possible. Maximal isolation of processes is one way to reach that result, but you must also minimize the probability that system components can be exploited by low-privilege code.

  • Modularity: You’ll generally want to make kernel components as independent from each other as possible. Aside from improving maintainability, and even reliability if you reach a level of modularity where you can restart failing kernel components on the fly without the live system taking a big hit, it also permits you to make some kernel features optional, a very nice property, especially when applied to hardware drivers in kernels which include them.

  • Developer goodies: In the dark ages of DOS, it was considered okay to ask developers to literally code hardware drivers into their software, as the operating system would do nearly nothing for them. This is not the case anymore. For everything which you claim to support, you must provide nice and simple abstractions which hide the underlying complexity of the hardware behind a friendly, universal interface.

  • Cool factor: Who’ll use a new kernel if it’s just the same as the others, merely done in a superior way? Let’s introduce power-efficient scheduling, rickrolling panic screens, colored and formatted log output, and other fun and unique stuff!

Now, let’s see how they end up in conflict with each other…

  • The quest for performance, when taken too far, conflicts with everything but scalability (writing everything in assembly, not using function calls, putting everything in the kernel for better speed, keeping the level of abstraction minimal…)

  • Maintainability conflicts with scalability, along with anything else that makes the codebase more complex, especially if the extra complexity can’t be confined to separate modules.

  • Portability is in conflict with everything that requires using or giving access to architecture-specific features, particularly when arch-specific code ends up spread all over the place instead of tightly packed in a known corner (as happens with some forms of performance optimization).

  • Scalability is in conflict with any feature or construct which can’t be used on 65536 CPU cores at the same time. Aside from the obvious compromise with maintainability and reliability, which fare better without hard-to-code and hard-to-debug threads spread all over the place, there’s also a balance with some developer goodies (an obvious example being the libc and its hundreds of blocking system calls).

  • Reliability is the fiend of anything which adds complexity, as more code statistically means more bugs, especially when said code is hard to debug. The conflict with performance is particularly big, as many performance optimizations require giving code more access to hardware than it actually needs. Reliability is also the sole design criterion in this list with the awesome property of conflicting with itself, as some system features which improve reliability also add complexity.

  • Security is a cousin of reliability as far as code complexity is concerned, since bugs can be exploitable. It also doesn’t like low-level code where every single action is not checked (pointer arithmetic, C-style arrays…), which is more prone to exploitable failures than the rest.

  • Modularity doesn’t like chunks of code which must be put at the same place in RAM. This means a serious conflict with performance, since code locality allows optimal use of CPU caches. The relationship between modularity and maintainability is ambiguous: separating system components from each other initially helps maintainability a lot, but extreme forms of modularity, like putting the scheduler (the part of the kernel which manages multitasking) in a separate process, can make the code quite confusing.

  • We’ve previously seen that developer goodies and other cool stuff conflict with a large part of the rest for a number of reasons. Notice also an extra side of the features-vs-maintainability conflict: it’s easy to add features, but hard to remove them, and you don’t know in advance how useful they will be. If you’re not careful, this results in the phenomenon of feature bloat, where the number of useless features keeps growing over time. A way to avoid this is to keep the feature set minimal in the first release, then examine user feedback to see what is actually lacking. But beware of the “second-system effect”, where you put everything you’re asked for into the second release, resulting in even worse feature bloat than if you had shipped a more extensive feature set to start with.

Some examples of kernel design

There are many operating system kernels in existence, though not all meet the same level of success. Here are a few stereotypical designs which tend to be quite frequently encountered (this list is by no means exhaustive):

Monolithic kernels

The way all OS kernels were written long ago, for performance reasons, and still the dominant kernel design today. The monolithic kernel model remains quite attractive due to the extreme simplicity of its design: the kernel is a single big process running with maximum privileges and tending to include everything but the kitchen sink. As an example, it is common for desktop monolithic kernels to include facilities for rendering GPU-accelerated graphics and for managing every single filesystem in existence.

Monolithic kernels shine especially in areas where high performance is needed, as everything is part of the same process. They are also easier to design, since the hardware resource model can be made simpler (only the kernel has direct access to hardware; user-space processes only see kernel-crafted abstractions), and since user space is not a major concern until late in the development process. On the other hand, this way of doing things heightens the temptation to use bad coding practices, resulting in unmaintainable, non-portable, non-modular code. Due to the large codebase and the full access to hardware, bugs in a monolithic kernel are also more frequent and have a larger impact than in more isolated kernel designs.

Examples of monolithic kernels include Linux and its Android fork, most BSD kernels, Windows NT, and XNU. (Yes, I know, the latter two call themselves hybrid kernels, but that’s mostly marketing: if you put most services in the same address space, with full access to the hardware and without any form of isolation between them, the result is still a monolithic kernel, with the advantages and drawbacks thereof.)


Microkernels

This is the exact opposite of a monolithic kernel in terms of isolation. The part of the kernel which has full access to the hardware is kept minimal (a few thousand lines of executable code in the case of MINIX 3, to be compared with the millions of lines of code of monolithic kernels like Linux or Windows NT), and most kernel services are moved into separate processes whose access to hardware is fine-tuned for their specific purpose.

Microkernels are highly modular by their very nature, and the isolated design favors good coding practices. Process isolation and fine-tuned access to hardware resources also ensure optimal reliability and security. On the other hand, microkernels are as much harder to write as they are easier to maintain, and the need to constantly switch from one process to another makes the most straightforward implementations perform quite poorly: it takes more optimization work to make a microkernel reach high performance, especially on the IPC side (as IPC becomes a critical mechanism).

Examples of commercial-grade microkernels include QNX, µ-velOSity and PikeOS. On the research side, one can mention MINIX 3, GNU Hurd, the L4 family, and the EROS family (KeyKOS, EROS, Coyotos, CapROS).

VM-based kernels

A fairly recent approach, which at the time of writing has not fully gotten out of research labs and proof-of-concept demos. Maybe you’ll be the one implementing it successfully. The idea here is that since most bugs and exploits in software come from textbook mistakes with native code (buffer overflows, dangling pointers, memory leaks…), native code is evil and should be phased out. The challenge is thus to code a whole operating system, including its kernel, in a managed language like C# or Java.

Benefits of this approach obviously include a very high cool factor, along with increased reliability and security. In a more distant future, it could also reach better performance than microkernels while providing similar isolation, by isolating processes through a purely software mechanism (since all pointers and hardware accesses are checked by the virtual machine, no process may access resources which it’s not allowed to access). On the other hand, nothing is free in the world of kernel development, and VM-based kernels have several major drawbacks to weigh against these high promises.

  • The kernel must include a full-featured interpreter for the relevant language, which means that the codebase will be huge and hard to design, and that very nasty VM bugs are to be expected during implementation.

  • Making a VM fast enough that it is suitable for running kernel-level code full of hardware access and pointer manipulation is another big programming challenge.

  • A VM running on top of bare hardware will be harder to write, and thus more buggy and exploitable, than a VM running in the user space of an operating system. At the same time, exploits will have even worse consequences. Currently, the Java Virtual Machine is one of the biggest sources of vulnerabilities on desktop PCs, so clearly something must change in the way we write VMs before they are suitable for inclusion in operating system kernels.

Examples of active VM-based kernel projects include Singularity, JNode, PhantomOS and Cosmos. There are also some interesting projects that are not active anymore, like JX and SharpOS (whose developers are now working in the MOSA project).

Bibliography, links, and friends

Having defined the general concepts which you’ll have to care about, I bet you want to get into more details. In fact, you should. So here is some material for going deeper than this introductory article on the subject of kernel design:

  • Modern Operating Systems (Andrew S. Tanenbaum): should you read only one book on the subject, I strongly recommend this one. It is an enlightening, extensive treatment, covering many aspects of kernel design, and one you may also put to use in many other parts of your OS development work.

  • You may also find a list of other books, along with some reviews, on the OSdev wiki.

  • While you’re on said wiki, you might also want to have a look at its main page, more precisely at the “Design Considerations” links in the left column (scroll down a bit). Globally, you should bookmark this website, because you’ll have a vital need for it once you start working on implementation. It’s, simply put, the best resource I know of on the subject.

  • And when you have questions, also consider asking them on their forum. They are asked hundreds of “how do I?” and “I’m stuck, what should I do?” implementation questions per month, so a bit of theoretical discussion would really please them. But beware of asking questions which are already answered in the wiki; otherwise, prepare to face Combuster’s sharp tongue.

  • Questions for an OS designer is also an interesting read, although it doesn’t go too deeply into specifics. I should have linked to it in my previous article.

And that’s all for now. Next time, we’re going to get a bit more platform-specific, as I’m going to describe the basic characteristics of the x86 architecture, which will be used for the rest of this tutorial (I’ll notably explain why).

Portable App Encrypt Stick Adds Secure Browser

Version 5 of Encrypt Stick remains one of the most secure and least intrusive ways to store and encrypt sensitive data. It installs to and runs off of your USB thumb drive, leaving no footprint on your (or anyone else’s) PC or Mac. It also uses polymorphic encryption (the algorithm changes for each device it runs from), which the company claims is 10 times faster than 256-bit AES, and provides a virtual keyboard to prevent key-logging programs from stealing your password. Encrypt Stick is available in a full $40 version and a free version, which is basically a demo of the full version.

Encrypt Stick’s interface is clean and classy. ENC Security Systems has addressed every minor complaint I had about the previous version: it’s now readily apparent that the program runs from your flash drive, and the interface is nigh-on flawless. Aside from fixes, version 5 of the paid version adds a secure Web browser that launches from within the Encrypt Stick interface. The browser rendered the limited number of sites I visited just fine, but trying to watch videos on YouTube was a frustrating, stuttering experience. Then again, YouTube is not what you use a secure browser for anyway. You use Encrypt Stick’s browser to prevent malware attacks, and it does this nicely by preventing third parties from installing any kind of software, including plug-ins.

My only issue with Encrypt Stick is the same one I have with all software-based encryption: speed, or the lack thereof compared to a hardware-based secure drive. That’s more than made up for by the cost difference: you may use Encrypt Stick on as many drives as you want, while hardware-encrypted flash drives are expensive.

The free version of Encrypt Stick is limited to 20MB of storage and one group of passwords, etc., and it restricts use of the Web browser to 30 days, but it’s still quite useful. A detailed feature comparison with the $40 full version can be found on the company’s Web site. All in all, Encrypt Stick is a most worthy program, much improved since my last look. Highly recommended if you need to secure your data on a flash drive.
