
Sunday, 21 December 2014

Cloud Technology

By Unknown | At 9:50:00 am | Label : , | 0 Comments

Cloud computing

Cloud computing is computing in which large groups of remote servers are networked to allow centralized data storage and online access to computer services or resources. Clouds can be classified as public, private or hybrid.

Criticism of cloud computing focuses mainly on its social implications. These concerns arise when the owner of the remote servers is a person or organisation other than the user, since their interests may point in different directions: the user may wish to keep his or her information private, while the owner of the remote servers may want to exploit it for their own business.

Overview

Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.

Cloud computing, or in simpler shorthand just "the cloud", also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically reallocated per demand, so capacity can be shifted between users as their needs change. For example, a cloud computing facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America's business hours with a different application (e.g., a web server). This approach maximizes the use of computing power and reduces environmental impact, since less power, air conditioning, rack space, etc. is required for a given set of functions. With cloud computing, multiple users can access a single server to retrieve and update their data without purchasing licenses for different applications.

The term "moving to cloud" also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to the OPEX model (use a shared cloud infrastructure and pay as one uses it).

Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of on infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a "pay as you go" model. This can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.

The present availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing have led to a growth in cloud computing.

Cloud vendors are experiencing growth rates of 50% per annum.

History

Origin of the term

The origin of the term cloud computing is unclear. The expression cloud is commonly used in science to describe a large agglomeration of objects that visually appear from a distance as a cloud and describes any set of things whose details are not inspected further in a given context.

By analogy to this usage, the word cloud was used as a metaphor for the Internet, and a standardized cloud-like shape was used to denote a network on telephony schematics and later to depict the Internet in computer network diagrams. With this simplification, the implication is that the specifics of how the end points of a network are connected are not relevant for the purposes of understanding the diagram. The cloud symbol was used to represent the Internet as early as 1994, in diagrams where servers were shown connected to, but external to, the cloud.

References to cloud computing in its modern sense appeared as early as 1996, with the earliest known mention in a Compaq internal document.

The popularization of the term can be traced to 2006 when Amazon.com introduced the Elastic Compute Cloud.

The 1950s

The underlying concept of cloud computing dates to the 1950s, when large-scale mainframe computers were seen as the future of computing and became available in academia and corporations, accessible via thin clients/terminal computers, often referred to as "static terminals" because they were used for communications but had no internal processing capacities. To make more efficient use of costly mainframes, a practice evolved that allowed multiple users to share both physical access to the computer from multiple terminals and the CPU time. This eliminated periods of inactivity on the mainframe and allowed for a greater return on the investment. The practice of sharing CPU time on a mainframe became known in the industry as time-sharing. During the mid-1970s, time-sharing was popularly known as RJE (Remote Job Entry); this nomenclature was mostly associated with large vendors such as IBM and DEC.

The 1990s

In the 1990s, telecommunications companies, who previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service, but at a lower cost. By switching traffic as they saw fit to balance server use, they could use overall network bandwidth more effectively. They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extends this boundary to cover all servers as well as the network infrastructure.

As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing. They experimented with algorithms to optimize the infrastructure, platform, and applications to prioritize CPUs and increase efficiency for end users.

Since 2000

In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds. In the same year, efforts were focused on providing quality of service guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment. By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."

In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack. The OpenStack project intended to help organizations offer cloud-computing services running on standard hardware. The early code came from NASA's Nebula platform as well as from Rackspace's Cloud Files platform.

On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet. Among the various components of the Smarter Computing foundation, cloud computing is a critical piece.

On June 7, 2012, Oracle announced the Oracle Cloud. While aspects of the Oracle Cloud are still in development, this cloud offering is poised to be the first to provide users with access to an integrated set of IT solutions, including the Applications (SaaS), Platform (PaaS), and Infrastructure (IaaS) layers.

Similar concepts


Cloud computing is the result of the evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to benefit from all of these technologies without needing deep knowledge about or expertise with each one of them. The cloud aims to cut costs and help users focus on their core business instead of being impeded by IT obstacles.

The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system–level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process, reduces labor costs and reduces the possibility of human errors.
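
To make the autonomic, on-demand provisioning described above concrete, here is a minimal sketch in Python of a scaling loop that measures utilisation and adjusts capacity without user involvement. The thresholds, the one-instance scaling step and the load figures are assumptions chosen purely for illustration, not values from any real cloud platform.

def scale(current_instances, avg_cpu_utilisation):
    """Return the instance count to use for the next interval."""
    if avg_cpu_utilisation > 0.80:            # overloaded: provision more capacity
        return current_instances + 1
    if avg_cpu_utilisation < 0.20 and current_instances > 1:
        return current_instances - 1          # mostly idle: release capacity
    return current_instances

# Example: demand rises during business hours, then falls off again.
instances = 2
for load in (0.55, 0.85, 0.90, 0.40, 0.10):
    instances = scale(instances, load)
    print("load {:.0%} -> {} instance(s)".format(load, instances))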

Users routinely face difficult business problems. Cloud computing adopts concepts from Service-oriented Architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.

Cloud computing also leverages concepts from utility computing to provide metrics for the services used. Such metrics are at the core of the public cloud pay-per-use models. In addition, measured services are an essential part of the feedback loop in autonomic computing, allowing services to scale on-demand and to perform automatic failure recovery.
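
As a small illustration of the metered, pay-per-use model mentioned above, the following Python sketch computes a bill from recorded usage. The rates and usage figures are hypothetical and exist only to show the arithmetic; they are not any provider's actual pricing.

# Hypothetical per-hour prices, for illustration only.
HOURLY_RATES = {
    "vm_small": 0.05,       # $ per instance-hour (assumed)
    "storage_gb": 0.0001,   # $ per GB-hour (assumed)
}

def monthly_cost(instance_hours, storage_gb_hours):
    """Compute a metered, pay-per-use bill from recorded usage."""
    return (instance_hours * HOURLY_RATES["vm_small"]
            + storage_gb_hours * HOURLY_RATES["storage_gb"])

# Example: three small VMs and 500 GB of storage running for a 30-day month.
hours_in_month = 30 * 24
print("${:.2f}".format(monthly_cost(3 * hours_in_month, 500 * hours_in_month)))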

Cloud computing is a kind of grid computing; it has evolved by addressing the QoS (quality of service) and reliability problems. Cloud computing provides the tools and technologies to build data- and compute-intensive parallel applications at much lower cost than traditional parallel computing techniques.

  • Grid computing — "A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
  • Mainframe computer — Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as: census; industry and consumer statistics; police and secret intelligence services; enterprise resource planning; and financial transaction processing.
  • Utility computing — The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."
  • Peer-to-peer — A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client–server model).

Wednesday, 17 December 2014

NFC Technology

By Unknown | At 10:50:00 am | Label : , | 0 Comments

Near field communication (NFC)

Near field communication (NFC) is a set of standards for smartphones and similar devices to establish radio communication with each other by touching them together or bringing them into proximity, typically a distance of 10 cm (3.9 in) or less.

As with proximity card technology, near-field communication uses electromagnetic induction between two loop antennas located within each other's near field, effectively forming an air-core transformer. It operates within the globally available and unlicensed radio frequency ISM band of 13.56 MHz on the ISO/IEC 18000-3 air interface and at rates ranging from 106 kbit/s to 424 kbit/s. NFC involves an initiator and a target; the initiator actively generates an RF field that can power a passive target, an unpowered chip called a "tag". This enables NFC targets to take very simple form factors such as tags, stickers, key fobs, or cards that do not require batteries. NFC peer-to-peer communication is possible, provided both devices are powered.

NFC tags contain data (currently between 96 and 4,096 bytes of memory) and are typically read-only, but may be rewriteable. The tags can securely store personal data such as debit and credit card information, loyalty program data, PINs and networking contacts, among other information. They can be custom-encoded by their manufacturers or use the specifications provided by the NFC Forum, an industry association with more than 160 members, founded in 2004 by Nokia, Philips Semiconductors (which became NXP Semiconductors in 2006) and Sony, and charged with promoting the technology and setting key standards. The NFC Forum defines four types of tags that provide different communication speeds and capabilities in terms of configurability, memory, security, data retention and write endurance. The Forum also promotes NFC and certifies device compliance, including whether a device meets the criteria for being considered a personal area network.
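
For a rough sense of what these numbers mean in practice, the short Python sketch below estimates how long it would take to read a full 4,096-byte tag at the raw air-interface data rates quoted above. It ignores protocol overhead, so real transfers are slower; the calculation is purely illustrative.

TAG_BYTES = 4096                    # the largest tag size mentioned above
RATES_KBIT_S = (106, 212, 424)      # NFC air-interface data rates

for rate in RATES_KBIT_S:
    seconds = (TAG_BYTES * 8) / (rate * 1000.0)
    print("{} kbit/s -> roughly {:.0f} ms for {} bytes".format(rate, seconds * 1000, TAG_BYTES))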

NFC standards cover communications protocols and data exchange formats, and are based on existing radio-frequency identification (RFID) standards including ISO/IEC 14443 and FeliCa. The standards include ISO/IEC 18092 and those defined by the NFC Forum. In addition to the NFC Forum, the GSMA has also worked to define a platform for the deployment of "GSMA NFC Standards" within mobile handsets. GSMA's efforts include the Trusted Services Manager, Single Wire Protocol, testing and certification, and the secure element.

A patent licensing program for NFC is currently under deployment by France Brevets, a patent fund created in 2011. A program under development by Via Licensing Corporation, an independent subsidiary of Dolby Laboratories, was terminated in May 2012. A public, platform-independent NFC library has been released under the free GNU Lesser General Public License under the name libnfc.

Present and anticipated applications include contactless transactions, data exchange, and simplified setup of more complex communications such as Wi-Fi.

Wednesday, 19 November 2014

Software

By Unknown | At 3:24:00 am | Label : , | 0 Comments
Software
Computer software, or simply software, is any set of machine-readable instructions that directs a computer's processor to perform specific operations. Computer software contrasts with computer hardware, which is the physical component of computers. Computer hardware and software require each other, and neither can be realistically used without the other.

Computer software includes computer programs, libraries and their associated documentation. The word software is also sometimes used in a narrower sense, meaning application software only. Software is stored in computer memory and cannot be touched; i.e., it is intangible.

At the lowest level, executable code consists of machine language instructions specific to an individual processor, typically a central processing unit (CPU). A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location inside the computer – an effect that is not directly observable to the user. An instruction may also (indirectly) cause something to appear on a display of the computer system – a state change which should be visible to the user. The processor carries out the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction, or interrupted.
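
The following toy Python sketch mirrors this description: a tiny invented instruction set is executed in order, one instruction changes a stored value (not directly visible), another produces visible output, and a conditional jump alters the flow. It is an illustration only, not any real machine language.

# A toy, invented instruction set; not any real machine language.
program = [
    ("SET", "x", 0),         # store a value in a particular "location"
    ("ADD", "x", 1),         # change the stored value (not directly visible)
    ("PRINT", "x", None),    # cause something to appear on the display
    ("JUMP_IF_LT", "x", 3),  # jump back to instruction 1 while x < 3
    ("HALT", None, None),
]

memory = {}
pc = 0                       # the processor starts at the first instruction
while True:
    op, arg, val = program[pc]
    if op == "SET":
        memory[arg] = val
    elif op == "ADD":
        memory[arg] += val
    elif op == "PRINT":
        print(arg, "=", memory[arg])
    elif op == "JUMP_IF_LT" and memory[arg] < val:
        pc = 1               # jump: continue from instruction index 1
        continue
    elif op == "HALT":
        break
    pc += 1                  # otherwise, carry on with the next instruction in order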

Software written in a machine language is known as "machine code". However, in practice, software is usually written in high level programming languages that are easier and more efficient for humans to use (closer to natural language) than machine language. High level languages are translated, using compilation or interpretation or a combination of the two, into machine language. Software may also be written in a low level assembly language, essentially, a vaguely mnemonic representation of a machine language using a natural language alphabet. Assembly language is translated into machine code using an assembler.

Based on the goal, computer software can be divided into:

    Application software, which uses the computer system to perform special functions or provide entertainment functions beyond the basic operation of the computer itself. There are many different types of application software, because the range of tasks that can be performed with a modern computer is so large; see the list of software.
    System software, which is designed to directly operate the computer hardware, to provide basic functionality needed by users and other software, and to provide a platform for running application software. System software includes:
        Operating systems, which are essential collections of software that manage resources and provide common services for other software that runs "on top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer that only has an operating system.
        Device drivers, which operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at least one input device and at least one output device, a computer typically needs more than one device driver.
        Utilities, which are computer programs designed to assist users in maintenance and care of their computers.
    Malicious software or malware, which are computer programs developed to harm and disrupt computers. As such, malware is undesirable. Malware is closely associated with computer related crimes, though some malicious programs may have been designed as practical jokes.

Software can also be categorized by how and where it runs:

    Desktop applications such as web browsers and Microsoft Office, as well as smartphone and tablet applications (called "apps"). (There is a push in some parts of the software industry to merge desktop applications with mobile apps, to some extent. Windows 8, and later Ubuntu Touch, tried to allow the same style of application user interface to be used on desktops and laptops, mobile devices, and hybrid tablets.)
    JavaScript scripts are pieces of software traditionally embedded in web pages that are run directly inside the web browser when a web page is loaded without the need for a web browser plugin. Software written in other programming languages can also be run within the web browser if the software is either translated into JavaScript, or if a web browser plugin that supports that language is installed; the most common example of the latter is ActionScript scripts, which are supported by the Adobe Flash plugin.
    Server software, including:
        Web applications, which usually run on the web server and output dynamically generated web pages to web browsers, using e.g. PHP, Java or ASP.NET, or even JavaScript that runs on the server. In modern times these commonly include some JavaScript to be run in the web browser as well, in which case they typically run partly on the server, partly in the web browser.
    Plugins and extensions are software that extends or modifies the functionality of another piece of software, and requires that software be present in order to function.
    Embedded software resides as firmware within embedded systems, devices dedicated to a single use or a few uses such as cars and televisions (although some embedded devices such as wireless chipsets can themselves be part of an ordinary, non-embedded computer system such as a PC or smartphone). In the embedded system context there is sometimes no clear distinction between the system software and the application software. However, some embedded systems run embedded operating systems, and these systems do retain the distinction between system software and application software (although typically there will only be one, fixed application which is always run).
    Microcode is a special, relatively obscure type of embedded software which tells the processor itself how to execute machine code, so it is actually a lower level than machine code. It is typically proprietary to the processor manufacturer, and any necessary correctional microcode software updates are supplied by them to users (which is much cheaper than shipping replacement processor hardware). Thus an ordinary programmer would not expect to ever have to deal with it.

Users often see things differently from programmers. People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.

    Platform software: Platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC one will usually have the ability to change the platform software.
    Application software: Application software or Applications are what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications.
    User-written software: End-user development tailors systems to meet users' specific needs. User-written software includes spreadsheet templates and word processor templates. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages and what has been added by co-workers.

Computer software has to be "loaded" into the computer storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation – moving data, carrying out a computation, or altering the control flow of instructions.

Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high speed data access in the CPU. Moving data, especially large amounts of it, can be costly. So, this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.

Software quality is very important, especially for commercial and system software like Microsoft Office, Microsoft Windows and Linux. If software is faulty (buggy), it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs." Software is often also a victim of what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs. Many bugs are discovered and eliminated (debugged) through software testing. However, software testing rarely – if ever – eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). All major software companies, such as Microsoft, Novell and Sun Microsystems, have their own software testing departments with the specific goal of just testing. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be quite large. For instance, NASA has extremely rigorous software testing procedures for many operating systems and communication functions. Many NASA-based operations interact with and identify each other through command programs; this enables many people who work at NASA to check and evaluate functional systems overall. Programs containing command software enable hardware engineering and system operations to function together much more easily.
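
To make the idea of unit testing mentioned above concrete, here is a minimal Python sketch using the standard library's unittest module. The function under test is invented purely for the example; real suites contain many such small, automated checks.

import unittest

def increment(value, step=1):
    """The kind of small, isolated unit that unit tests target (invented example)."""
    return value + step

class IncrementTest(unittest.TestCase):
    def test_default_step(self):
        self.assertEqual(increment(41), 42)

    def test_custom_step(self):
        self.assertEqual(increment(10, step=5), 15)

if __name__ == "__main__":
    unittest.main()    # a test runner discovers and executes tests like these automatically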

The software license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies.

Proprietary software can be divided into two types:

    freeware, which includes the historical category shareware. As the name suggests, freeware can be used for free, although in the case of shareware this is sometimes only true for a limited period of time. However, the term shareware has fallen out of use, as the name was coined in a pre-Internet age, and even large, well-established software companies such as Microsoft commonly offer free trial versions of some or all of their software.
    software available for a fee, often inaccurately termed "commercial software", which can only be legally used on purchase of a license.

Open source software, on the other hand, comes with a free software license, granting the recipient the rights to modify and redistribute the software.

Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea (e.g. an algorithm) on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations (i.e. the actual software packages implementing the patent) are not supposed to be patentable either; the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid, although since all useful software has effects on the physical world, this requirement may be open to debate.

Software patents are controversial in the software industry, with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers; for example, the patent for Aspect-Oriented Programming (AOP) purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the US, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents.

Design and implementation of software varies depending on the complexity of the software. For instance, design and creation of Microsoft Word software will take much more time than designing and developing Microsoft Notepad because of the difference in functionalities in each one.

Software is usually designed and created (coded/written/programmed) in integrated development environments (IDEs) like Eclipse, Emacs and Microsoft Visual Studio that can simplify the process and compile the program. As noted in a different section, software is usually created on top of existing software and the application programming interfaces (APIs) that the underlying software provides, such as GTK+, JavaBeans or Swing. Libraries (APIs) are categorized for different purposes. For instance, the JavaBeans library is used for designing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. Underlying computer programming concepts like quicksort, hash tables, arrays, and binary trees can be useful when creating software. When a program is designed, it relies on the API. For instance, if a user is designing a Microsoft Windows desktop application, he or she might use the .NET Windows Forms library to design the desktop application, call its APIs like Form1.Close() and Form1.Show() to close or open the application, and write the additional operations the application needs. Without these APIs, the programmer would have to write that functionality entirely on his or her own. Companies like Sun Microsystems, Novell, and Microsoft provide their own APIs, so many applications are written using their software libraries, which usually contain numerous APIs.
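
As an example of one of the programming concepts named above, here is a short quicksort sketch in Python. It is written for clarity rather than performance (an in-place implementation would avoid building the intermediate lists).

def quicksort(items):
    """Sort a list by partitioning around a pivot, then sorting each side."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]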

Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods.

A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning.
A great variety of software companies and programmers around the world make up the software industry. Software can be quite a profitable industry: Bill Gates, the co-founder of Microsoft, was the richest person in the world in 2009, largely due to his ownership of a significant number of shares in Microsoft, the company responsible for the Microsoft Windows and Microsoft Office software products.

Non-profit software organizations include the Free Software Foundation, GNU Project and Mozilla Foundation. Software standards organizations like the W3C and IETF develop software standards so that most software can interoperate through standards such as XML, HTML and HTTP.

Other well known large software companies include Oracle, Novell, SAP, Symantec, Adobe Systems, and Corel, while small companies often provide innovation.

Monday, 17 November 2014

Melinda Gates

By Unknown | At 1:17:00 pm | Label : , | 0 Comments

Melinda French Gates

Melinda Gates (née Melinda Ann French; born August 15, 1964) is an American businesswoman and philanthropist. She is the co-founder of the Bill & Melinda Gates Foundation and the wife of Bill Gates. She worked at Microsoft, where she was project manager for Microsoft Bob, Microsoft Encarta and Expedia.
Gates was born in 1964 in Dallas, Texas. She was the second of four children born to Raymond Joseph French Jr., an engineer, and Elaine Agnes Amerland, a homemaker. She has an older sister and two younger brothers. Gates, a Roman Catholic, attended St. Monica Catholic School, where she was the top student in her class year. She graduated as valedictorian from Ursuline Academy of Dallas in 1982. Gates earned a bachelor's degree in computer science and economics from Duke University in 1986 and an MBA from Duke's Fuqua School of Business in 1987. She was a member of the Kappa Alpha Theta sorority, Beta Rho Chapter, at Duke University.

Shortly thereafter, she joined Microsoft and participated in the development of many of Microsoft's multimedia products, including Publisher, Microsoft Bob, Encarta, and Expedia.

She met Bill Gates while working at Microsoft. In 1994, she married him in a private ceremony held in Lanai, Hawaii. Shortly thereafter, she left Microsoft to focus on starting and raising her family; her last position at Microsoft was General Manager of Information Products. Melinda and Bill Gates have three children: daughters Jennifer Katharine Gates (born 1996) and Phoebe Adele Gates (born 2002), and son Rory John Gates (born 1999). The family resides in a large mansion on the shore of Lake Washington.

Gates served as a member of Duke University's board of trustees from 1996 to 2003. She attends Bilderberg Group conferences and holds a seat on the board of directors of The Washington Post Company. She retired from the board of Drugstore.com in August 2006 to spend more time working for the Bill & Melinda Gates Foundation.
As of 2014, Melinda and Bill Gates have donated more than US$30 billion to the Foundation.
In 2002, Melinda and Bill Gates received the Award for Greatest Public Service Benefiting the Disadvantaged, an award given out annually by Jefferson Awards.

In December 2005, Melinda and Bill Gates were named by Time as Persons of the Year alongside Bono. Melinda and Bill Gates received the Spanish Prince of Asturias Award for International Cooperation on May 4, 2006 in recognition of their world impact through charitable giving. In November 2006, Melinda and Bill Gates were awarded the Order of the Aztec Eagle for their philanthropic work around the world in the areas of health and education, particularly in Mexico, and specifically in the program "Un paĆ­s de lectores".

In May 2006, she was honored for her work to improve the lives of children locally and around the world with the naming of the Melinda French Gates Ambulatory Care Building at Seattle Children's (then called Children's Hospital and Regional Medical Center). She also chaired The Campaign for Children, a $300 million comprehensive fundraising campaign to expand facilities, fund under-compensated and uncompensated care, and grow the hospital's research program to find cures and treatments.

On June 12, 2009, Melinda and Bill Gates received honorary degrees from the University of Cambridge. Their benefaction of $210 million in 2000 set up the Gates Cambridge Trust, which funds postgraduate scholars from outside the UK to study at the University.

In 2013, Gates was awarded an honorary Doctor of Humane Letters by Duke University as a tribute to her philanthropic commitment. She was also ranked #3 in Forbes' 2013 and 2014 lists of the 100 Most Powerful Women, #4 in 2012 and #6 in 2011. Armchair Advocates also added Gates to its list of "100 Tweeters of Social Good You Have to Follow in 2013."

Gates was appointed an honorary Dame of the British Empire in 2013 for her services to philanthropy and international development.

Gates has also donated over $10 million to her high school Ursuline Academy of Dallas. She is one of the major donors of their Facing the Future Campaign and was honored in their dedication ceremony on May 7, 2010.

Sunday, 16 November 2014

Laptop Overheating Solutions

By Unknown | At 1:43:00 pm | Label : , | 0 Comments

Computer (Laptop) Cooling Basics

Laptop Overheating
The cooling of the CPU, otherwise referred to as "the chip" or, to laymen, "the brain" of the laptop, is a dilemma that most manufacturers have to face when designing a laptop enclosure and choosing the correct CPU for it. The cooling is normally performed by a fan and some kind of metal conductor, like copper or aluminium, called a heat sink. The CPU, and lately the GPU, are connected to the metal heat sink via a thermal grease or compound. This grease conducts heat but not electricity. The trick for manufacturers is to get rid of as much heat as possible using as small a fan and heat sink as the CPU will allow. Vents are also cut into the casing, allowing the fan to suck cool air from the bottom, force it over the heat sink and blow it out the side or rear, thus cooling the CPU and GPU. In more modern designs, copper is used as the conducting metal, liquid is pumped through the system, and radiators and exhaust ports are used, just as in motor vehicles. All this is done to get rid of the heat and make the system run faster.

The problem is that over time dust and other particles clog the vents, fan and exhaust port or radiator of the system, restricting air flow and cooling. This is fixed relatively easily by blowing out the vents and fan with air or using a brush or cotton bud to clean away the dust. Remember: in the computer world, DUST DESTROYS! There is, however, another hidden problem that occurs when computers heat up or overheat: the heat tends to dry out the thermal compound that conducts heat away, causing the system to overheat more quickly. Luckily, most CPU, GPU and chip manufacturers have built-in protection for this. They step down the operating speed bit by bit until they eventually switch off the CPU, and thus the system shuts down. So if you have a computer system that starts working slower and slower and then switches off for no apparent reason, overheating could be your problem.

To solve the overheating problem, especially in laptops, I am going to show you how to get to the cooling unit, dust it out, replace the thermal grease and put everything together again. In order to demonstrate this, I will be using a friend’s LG F1 Pro Express Dual laptop that started exhibiting just such symptoms. It would become sluggish and then suddenly switch off for no reason. This caused him a lot of lost work and a corrupted Outlook PST email file. Here I will show you, step by step, the solution to this nasty problem.

First Step

Rather than buying a replacement laptop, there are a few simple steps you can take in order to give it a second wind. The following guide will walk you through two methods for cleaning your laptop’s cooling system. One method is extremely simple and non-invasive, and the other is a complete cleaning, best suited for those with some computer hardware experience.

Second Step
  • A Small Screwdriver for Opening the Back

  • A Compressed Air Duster

  • A Clean and Organised Workspace

  • An Anti-Static Wrist Strap (optional but highly recommended)


Starting off with a basic cleaning only requires a few common tools. Just about any electronics store, such as Fry’s or Best Buy, will have all of these tools (minus the workspace, of course).

Third Step

Believe it or not, this is one of the harder steps, simply because laptops come in all sorts of different models. Some are easy to get into, others have hidden screws and latches that could require specialised tools. In most cases, like with this Dell Inspiron, opening the back panel is relatively simple.

First you’ll want to shut down your laptop, unplug it, and remove the battery. If you’re using an anti-static wrist strap, now is the time to put it on and attach it to a ground. If your laptop looks as straightforward as this Dell, odds are it is, but keep in mind that opening it up should not require much prying at all. If it doesn’t seem to want to open, check again for other screws (sometimes located on the sides or even around the keyboard).

Some laptops may prove too difficult to get into. If this is the case, you can still make use of the canned air by blowing it into the various fan vents in an attempt to clear out some of the gunk. This is generally sufficient as a temporary fix.

Fourth Step

Now that you have your laptop open, locating the heatsink should be fairly simple. Modern laptops make use of “heat piping,” copper tubes that channel heat to the heatsink. You should be able to follow these tubes (some laptops have more than one) from the CPU and GPU to the fins of the heatsink.

Using the canned air, blow steady controlled bursts into the heatsink and surrounding area. It’s OK to use your fingers to remove any larger chunks of lint that don’t come out with the air. If your laptop allows you access to the fan as well, you can use the canned air to clean it as well. For the fan though, you’ll want to use quick bursts of air. You don’t want to spin the fan too much or too fast.

If you’re uninterested in the more advanced cleaning directions, feel free to skip ahead to Step Eight.

Fifth Step

-A Small Screwdriver for Opening the Back

-A Compressed Air Duster

-Denatured Alcohol (or 100% pure Isopropyl)

-A Lint Free Cloth

-A Smooth Plastic Edge (e.g. an old credit card)

-A Small Tube of Thermal Paste (such as Arctic Silver 5)

-A Clean and Organised Workspace

-An Anti-Static Wrist Strap (optional but highly recommended)

The more advanced cleaning will involve removing the heatsink and reapplying new thermal paste. This will completely revamp your laptop’s cooling system, possibly even making it better than new (depending on the quality of thermal paste the manufacturer used).

Also, an important note: be sure to use either denatured alcohol (available at most hardware stores) or 100% pure Isopropyl. Non-pure isopropyl alcohols contain various minerals or water that can ruin your laptop components.

Sixth Step

Picking up from step two, it’s time to remove the heatsink from the CPU and GPU. In most cases, the CPU and GPU heatsinks will be connected via heat piping, so you’ll have to remove the screws on both before pulling it off.

When lifting off the heatsink, be sure not to bump the base (the portion that contacts the CPU and GPU) against anything. These surfaces are usually designed with microgrooves to help improve contacting surface area. Any nicks or scratches can reduce the cooling efficiency.

Some laptops will have the fan attached to the heatsink. For these configurations, you’ll generally have to disconnect a small power cable running from the fan to the motherboard. Make sure to note where the connectors are so that you can reattach it when you’re finished.

Seventh Step

Now that the heatsink is off, you’ll want to give it a good once over with the canned air. Try to get the fins as clear of obstructions as you can. Once it’s cleared of dust and lint, it’s time to clean off all of the old thermal paste.

This is where that plastic edge comes in. Using it, you’ll want to gently scrape off as much of the old thermal paste as possible. Once finished with the plastic edge, it’s time to make use of the denatured alcohol and lint free cloth.

Start by dampening the cloth slightly with the denatured alcohol. You want it damp enough to wipe the heatsink clean, but not wet enough to drip. It will take a bit of work, but you should ideally have virtually all traces of the previous thermal paste removed.

Once clean, make sure you don’t touch the contact surface of the heatsink. Even slight contaminates, like fingerprints, can greatly reduce the cooling performance of the heatsink.

Eighth Step

Just like the heatsink, the CPU and GPU need to be cleaned. This is accomplished in the same fashion as cleaning the heatsink, albeit slightly more delicately. Starting with the plastic edge, gently swipe away the old thermal paste. You’ll be relying mostly on the lint free cloth and alcohol for cleaning the CPU and GPU, so you only need to use the plastic edge to clean off the bulk of the old paste.

After scraping away most of the old paste, give it a quick burst of canned air to get rid of any scrapings left behind. Then, using the lint free cloth dampened slightly with the alcohol, gently wipe away the remaining thermal paste. Once clean, the tops of the chips should have a near mirror finish.

Ninth Step

Just about every different type of processing chip has a different method for applying thermal paste, but the methods for applying thermal paste onto laptop chips are fairly universal. Start with a single small dot of paste in the middle of the chip. Then, using the plastic edge (clean it off first) or the tip of the new thermal paste tube, spread it evenly over the rest of the chip. You’ll end up with a thin layer of thermal paste about the same thickness as a piece of paper.

Even though it’s tempting to cake on the thermal paste, it’s actually best to use as little as possible while still covering the whole chip. High-end thermal pastes are designed to work optimally when only thick enough to fill the various microgrooves on the face of the chip and heatsink while remaining just microns thick otherwise.

Tenth Step

Reattaching a heatsink is pretty straight forward; however, there are a few details to keep in mind. It’s best to carefully align the heatsink first, before seating it into place. This reduces the chances of smearing the thermal paste. If you bump or lift the heatsink back off of the chips after making contact, you’ll have to go through the thermal paste removal and application process all over again.

Once the heatsink is seated, give it a little wiggle laterally to help spread the thermal paste into all of the microgrooves. After that, it’s time to retighten the screws. Most laptops are designed so that the screws can only be tightened so far, but you’ll still want to be careful not to use too much pressure while tightening them. Be sure to tighten the screws in a crossing pattern, going over each screw several times to ensure that they are all tight.

Eleventh Step

Now it’s time to close up the system. Give everything a good burst of canned air to get rid of any remaining dust and lint. Reattach any cables or connectors that you may have bumped or had to disconnect. Also make sure that all of the wires are tucked away so that they won’t be crimped when you attach the back plate again.

From here on out, it’s pretty much just the reverse of opening it all up in the first place. Some systems will have you push the back plate straight down to have it snap into place, while others might require you to start with a corner and hinge it shut. If you’re not sure which way yours closes, just be gentle and try different orientations. As long as you don’t force it, you’ll get it in place eventually.

With everything put back into place, you’re ready to start using your laptop once again. You should see considerable improvements in cooling immediately, with continued improvements over the next several days as the thermal paste settles in.

Twelfth Step

Any time we’re using our laptops, they’re sucking in dust and lint, so the least we can do is be conscious of how and when we’re using them. For starters, you’ll probably notice that your laptop has vents on the bottom for pulling in air (about 95% of laptops do). If you use your laptop on your lap or some other cloth surface, it not only blocks these vents, reducing airflow, but it also introduces a lot more lint.

It’s also very helpful to turn your laptop off (or at least put it to sleep) when you aren’t using it. Just turning your laptop off at night can reduce the amount of dust it takes in by 33%, compared to leaving it on at all times.

Lastly, and it may seem cruel, pets are one of the leading causes of gunk buildup in laptops. Now, this doesn’t mean you have to get rid of your pets. Just try to keep them away from your laptop, and avoid using your laptop on or near places your pet sleeps.

Friday, 14 November 2014

RAID 0

By Unknown | At 3:51:00 am | Label : , | 0 Comments

The standard RAID levels

The standard RAID levels are a basic set of RAID configurations that employ the techniques of striping, mirroring, or parity to create large, reliable data stores from general-purpose computer hard disk drives. One of the most common types today is RAID 0 (striping).

A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped), without parity information and with speed as the intended goal. RAID 0 was not one of the original RAID levels and provides no data redundancy. RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical disk out of two or more physical ones.

A RAID 0 can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 320 GB disk, the size of the array will be 240 GB (120 GB × 2).

Size = 2 × min(120 GB, 320 GB)
     = 2 × 120 GB
     = 240 GB
The data is distributed into stripes (A1, A2, A3, ...) across the disks. Accessing the stripes in the order A1, A2, A3, ... provides the illusion of a larger and faster drive. Once the stripe size is defined at creation, it needs to be maintained at all times.
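
The following Python sketch models this striping scheme in a simplified way, writing fixed-size stripes to the disks in turn and computing the usable capacity from the smallest disk, as in the example above. The stripe size and disk count are illustrative choices only.

STRIPE_SIZE = 4    # bytes per stripe here; real arrays use far larger stripes
NUM_DISKS = 2

def stripe(data, num_disks=NUM_DISKS, stripe_size=STRIPE_SIZE):
    """Distribute data round-robin across the disks, RAID 0 style."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), stripe_size):
        disks[(i // stripe_size) % num_disks] += data[i:i + stripe_size]
    return disks

def array_size(*disk_sizes_gb):
    """Usable RAID 0 capacity: each disk contributes only the smallest disk's size."""
    return len(disk_sizes_gb) * min(disk_sizes_gb)

for n, d in enumerate(stripe(b"ABCDEFGHIJKLMNOP")):
    print("disk", n, ":", bytes(d))   # stripes A1, A3, ... on disk 0; A2, A4, ... on disk 1
print(array_size(120, 320), "GB")     # 240, matching the example above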

Performance
RAID 0 is also used in areas where performance is desired and data integrity is not very important, for example in some computer gaming systems. Although some real-world tests with computer games showed a minimal performance gain when using RAID 0 (albeit with some desktop applications benefiting), another article examined these claims and concluded that striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance.

Wednesday, 12 November 2014

Steve Ballmer

By Unknown | At 6:09:00 pm | Label : , | 0 Comments

Steve Ballmer

Steve Ballmer (born March 24, 1956) is an American businessman who was the CEO of Microsoft from January 2000 to February 2014, and is the current owner of the Los Angeles Clippers. As of 2014, his personal wealth is estimated at US$20.7 billion, ranking number 32 on the Forbes 400. It was announced on August 23, 2013, that he would step down as Microsoft's CEO within 12 months. On February 4, 2014, Ballmer retired as CEO and was succeeded by Satya Nadella; Ballmer resigned from the board of directors on August 19, 2014, to prepare for teaching a new class and for the start of the NBA season.
On May 29, 2014, Ballmer placed a bid of $2 billion to purchase the Los Angeles Clippers of the National Basketball Association (NBA). He officially became the Clippers owner on August 12, 2014.

Ballmer was born in Detroit, the son of Beatrice Dworkin and Frederic Henry Ballmer, a manager at the Ford Motor Company. His father was a Swiss immigrant, and his mother was Jewish (her family was from Belarus). Ballmer grew up in the affluent community of Farmington Hills, Michigan. In 1973, he attended college prep and engineering classes at Lawrence Technological University. He graduated from Detroit Country Day School, a private college preparatory school in Beverly Hills, Michigan, with a perfect score of 800 on the mathematical section of the SAT and was a National Merit Scholar. He now sits on the school's board of directors. In 1977, he graduated magna cum laude from Harvard College with an A.B. in applied mathematics and economics.

At college, Ballmer was a manager for the football team, worked on The Harvard Crimson newspaper as well as the Harvard Advocate, and lived down the hall from fellow sophomore Bill Gates. He scored highly in the prestigious William Lowell Putnam Mathematical Competition, an exam sponsored by the Mathematical Association of America, scoring higher than Bill Gates. He then worked for two years as an assistant product manager at Procter & Gamble, where he shared an office with Jeffrey R. Immelt, who later became CEO of General Electric. In 1980, he dropped out of the Stanford Graduate School of Business to join Microsoft.

Steve Ballmer joined Microsoft on June 11, 1980, and became Microsoft's 30th employee, the first business manager hired by Gates.

Ballmer was initially offered a salary of $50,000 as well as a percentage of ownership of the company. When Microsoft was incorporated in 1981, Ballmer owned 8 percent of the company. In 2003, Ballmer sold 8.3% of his shareholdings, leaving him with a 4% stake in the company. The same year, Ballmer replaced Microsoft's employee stock options program.

In the 20 years following his hire, Ballmer headed several Microsoft divisions, including operations, operating systems development, and sales and support. From February 1992 onwards, he was Executive Vice President, Sales and Support. Ballmer led Microsoft's development of the .NET Framework. Ballmer was then promoted to President of Microsoft, a title that he held from July 1998 to February 2001, making him the de facto number two in the company to the Chairman and CEO, Bill Gates.

In January 2000, Ballmer was officially named Chief Executive Officer. As CEO, Ballmer handled company finances and daily operations, but Gates remained chairman of the board and still retained control of the technological vision as chief software architect. Gates relinquished day to day activities when he stepped down as chief software architect in 2006, while staying on as chairman, and that gave Ballmer the autonomy needed to make major management changes at Microsoft.

When Ballmer took over as CEO, the company was fighting an antitrust lawsuit brought by the U.S. government and 20 states, plus class action lawsuits and complaints from rival companies. While it was said that Gates would have continued fighting the suit, Ballmer made it his priority to settle these, saying: "Being the object of a lawsuit, effectively, or a complaint from your government is a very awkward, uncomfortable position to be in. It just has all downside. People assume if the government brought a complaint that there's really a problem, and your ability to say we are a good, proper, moral place is tough. It's actually tough, even though you feel that way about yourselves."

Upon becoming CEO, Ballmer required detailed business justification in order to approve new products, rather than allowing hundreds of products that sounded potentially interesting or trendy. In 2005, he recruited B. Kevin Turner from Wal-Mart Stores, where he was executive vice president, to become Microsoft's chief operating officer and to add scorecards for measuring customer satisfaction and other key sales metrics.

Since Bill Gates' retirement, Ballmer oversaw a dramatic shift away from the company's PC-first heritage, replacing most major division heads in order to break down the talent-hoarding fiefdoms, and Businessweek said that the company arguably now has the best product lineup in its history. Ballmer was instrumental in driving Microsoft's cloud computing strategy, with acquisitions such as Skype.

Under Ballmer's tenure as CEO, Microsoft's annual revenue surged from $25 billion to $70 billion, while its net income increased 215 percent to $23 billion, and its gross profit of 75 cents on every dollar in sales is double that of Google or International Business Machines Corp. In terms of leading the company's total annual profit growth, Ballmer's tenure at Microsoft (16.4 percent) surpassed the performances of other well-known CEOs such as General Electric's Jack Welch (11.2 percent) and IBM's Louis V. Gerstner, Jr. (2 percent). These gains came from the existing Windows and Office franchises, with Ballmer maintaining their profitability, fending off threats from cheaper competitors such as GNU/Linux and other open-source operating systems and Google Docs.

Ballmer also built half a dozen new businesses, such as the data centers division ($6.6 billion in profit for 2011) and the Xbox entertainment and devices division ($8.9 billion), which has prevented the Sony PlayStation and other gaming consoles from undermining Windows, and oversaw the acquisition of Skype. Ballmer also constructed the company's $20 billion Enterprise Business, consisting of new products and services such as Exchange, Windows Server, SQL Server, SharePoint, System Center, and Dynamics CRM, each of which initially faced an uphill battle for acceptance but has emerged as leading or dominant in its category. This diversified product mix helped to offset the company's reliance on PCs and mobile computing devices as the company entered the post-PC era; in reporting quarterly results during April 2013, while Windows Phone 8 and Windows 8 had not managed to increase their market share above single digits, the company increased its profit 19 percent over the previous quarter in 2012, as the Microsoft Business Division (including Office 365) and the Server and Tools division (cloud services) are each larger than the Windows division.

Ballmer attracted criticism for failing to capitalize on several new consumer technologies, forcing Microsoft to play catch-up in the areas of tablet computing, smartphones and music players, with mixed results. Under Ballmer's watch, in many cases Microsoft latched onto technologies like smartphones, touchscreens, smart cars and wristwatches that read sports scores aloud long before Apple or Google did, but it repeatedly killed promising projects if they threatened its cash cows, Windows and Office. Microsoft's share price stagnated during Ballmer's tenure. As a result, in May 2012, hedge fund manager David Einhorn called on Ballmer to step down as CEO of Microsoft. "His continued presence is the biggest overhang on Microsoft's stock," Einhorn said in reference to Ballmer. In a May 2012 column in Forbes magazine, Adam Hartung described Ballmer as the worst CEO of a large publicly traded American company, saying he had steered Microsoft out of some of the fastest growing and most lucrative tech markets (mobile music, handsets and tablets).

In 2009, for the first time since Bill Gates resigned from day-to-day management at Microsoft, Ballmer delivered the opening keynote at CES.

On June 19, 2012, Ballmer revealed Microsoft's new tablet device, the Microsoft Surface, at an event held in Hollywood, Los Angeles.

On August 23, 2013, Microsoft announced that Ballmer would retire within the next 12 months. A special committee that included Bill Gates would decide on the next CEO.

A number of executives had been regarded as potential successors to Ballmer as Microsoft CEO, but all of them had departed the company: Jim Allchin, Brad Silverberg, Paul Maritz, Nathan Myhrvold, Greg Maffei, Pete Higgins, Jeff Raikes, J. Allard, Robbie Bach, Bill Veghte, Ray Ozzie, Bob Muglia and Steven Sinofsky. B. Kevin Turner, Microsoft's Chief Operating Officer (COO), was considered by some to be a de facto number two to Ballmer, with a strong grasp of business and operations but lacking technological vision. On February 4, 2014, Satya Nadella succeeded Ballmer as CEO.

Ballmer has also served as a director of Accenture Ltd. and a general partner of Accenture SCA since October 2001.

TechCrunch

By Unknown | At 5:50:00 pm | Label : , | 0 Comments

TechCrunch

is a news website focused on information technology companies, ranging in size from start-ups to established NASDAQ-100 firms. It was founded by Michael Arrington in 2005. On September 28, 2010, at its TechCrunch Disrupt conference in San Francisco, AOL announced that it would acquire TechCrunch. The transaction was rumored to be valued at between $25 million and $40 million.[6]

In 2011, the site came under fire for possible ethics violations, including claims that Arrington's investments in certain firms the site had covered created a conflict of interest. The controversy that ensued eventually led to Arrington's departure, and other writers, including Paul Carr and Sarah Lacy, followed suit.
TechCrunch Disrupt is an annual conference hosted by TechCrunch in San Francisco, New York City, and Beijing. It began in 2011 and is where technology start-ups launch their products and services, competing on stage in front of venture capitalists, potential investors, media and other interested parties for prize money and publicity. Past winners include Qwiki, Getaround, and Enigma.io.
A scandal erupted over the Titstare application, created by participants in a hackathon at Disrupt 2013.

In 2014, TechCrunch Disrupt was featured in an arc of the HBO series Silicon Valley, in which the characters' startup, Pied Piper, competes in a startup battle at TechCrunch Disrupt. According to TechCrunch editor Sam O'Keefe, the show's representation of the conference was "obscenely accurate".

TechCrunch operates CrunchBase, a database of companies and start-ups comprising around 500,000 data points profiling companies, people, funds, fundings and events. The company claims to have more than 50,000 active contributors. Members of the public, subject to registration, can make submissions to the database; however, all changes are reviewed by a moderator before being accepted, and the data is regularly reviewed by editors to keep it up to date. CrunchBase says it has 2 million users accessing its database each month.
AOL is in a dispute with the start-up Pro Populi over that group's use of the entire CrunchBase dataset in apps it has developed, one of which is known as People+. Pro Populi is being represented by the Electronic Frontier Foundation.

Advanced Technology

By Unknown | At 5:13:00 pm | Label : , | 0 Comments

Advanced Technology

Advanced Technology
is a one-of-a-kind BSc programme, taught only at the University of Twente. It is a broad technical Bachelor's programme that is finely tuned to society's needs. Its multidisciplinary approach combines different engineering and natural science disciplines. If you would like to discover more about this unique English-taught programme, read about Advanced Technology at the University of Twente, the study programme, or be a student for a day and experience for yourself what the programme and student life at Twente are all about.
Advanced Technology is an English-taught technical programme with a keen eye for society's needs. Its multidisciplinary approach brings together a range of engineering and natural science disciplines, giving you the scope to come up with innovative and unexpected solutions to new problems without being confined to a single area of science. You will learn how to combine knowledge from electrical engineering, chemical engineering, applied physics, mathematics, mechanical engineering and business administration.

In this varied and fascinating programme, you will experience a range of completely different settings and will draw on all the theoretical knowledge you have acquired in a wide range of different disciplines to successfully tackle a series of challenging projects. You will soon discover the immense value of examining a problem from different perspectives and experience the fulfilment of producing a multidisciplinary solution. The programme also develops your awareness of how society can gain the greatest benefit from the solutions you devise.

An internationally recognized Bachelor's degree in Advanced Technology is excellent preparation for a number of different Master's degrees at the University of Twente or at other universities in the Netherlands or abroad.

If you choose to study Advanced Technology at the University of Twente, you are opting for high-quality, project-led education in an international environment with a personal and informal atmosphere. Do you want to know more about Advanced Technology? Read more about the programme at the University of Twente.
Do you have an inquiring mind and a strong desire to explore and understand the technical aspects of everything that is happening in your world? Are you looking for a degree that will give you the ability to cope with almost any technological challenge? Then Advanced Technology is the programme for you! Discover why you should study Advanced Technology at the University of Twente. Check out the eligibility criteria or contact us for more information.

Computer Technology

By Unknown | At 5:00:00 pm | Label : , | 0 Comments

A computer is a general-purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically.

Since a sequence of operations can be readily changed, the computer can solve more than one kind of problem.

Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices allow information to be retrieved from an external source and the results of operations to be saved and retrieved.
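
To make that fetch-and-execute idea more concrete, here is a deliberately tiny, purely illustrative sketch in Python of a stored-program machine: a handful of invented instructions, a memory of named cells, and a control loop whose program counter can be redirected by a conditional jump, so the order of operations changes in response to stored information. Nothing here corresponds to any real instruction set.

    # Purely illustrative sketch: a tiny stored-program machine with a
    # processing element (arithmetic on an accumulator), memory (named cells)
    # and a control unit whose order of operations depends on stored data.
    def run(program, memory):
        pc = 0  # program counter: which instruction executes next
        while pc < len(program):
            op, *args = program[pc]
            if op == "LOAD":            # memory -> accumulator
                memory["acc"] = memory[args[0]]
            elif op == "ADD":           # arithmetic in the processing element
                memory["acc"] += memory[args[0]]
            elif op == "STORE":         # accumulator -> memory
                memory[args[0]] = memory["acc"]
            elif op == "JUMP_IF_LT":    # conditional jump: control flow decided by data
                if memory["acc"] < memory[args[0]]:
                    pc = args[1]
                    continue
            pc += 1
        return memory

    # Sum the integers 1..5: jump back to instruction 0 while the counter stays below 6.
    memory = {"acc": 0, "total": 0, "i": 1, "one": 1, "limit": 6}
    program = [
        ("LOAD", "total"), ("ADD", "i"), ("STORE", "total"),   # total += i
        ("LOAD", "i"), ("ADD", "one"), ("STORE", "i"),         # i += 1
        ("JUMP_IF_LT", "limit", 0),
    ]
    print(run(program, memory)["total"])  # prints 15

Because the program itself sits in memory as data, changing the sequence of operations is simply a matter of changing that data, which is exactly why such a machine can solve more than one kind of problem.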

In World War II, mechanical analog computers were used for specialized military applications. During this time the first electronic digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).

Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are what most people think of as computers. However, the embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are the most numerous.

The first use of the word "computer" was recorded in 1613 in a book called The Yong Mans Gleanings by the English writer Richard Braithwait: "I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number." The word referred to a person who carried out calculations, or computations, and continued to be used with that meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.

Rudimentary calculating devices first appeared in antiquity, and mechanical calculating aids were invented in the 17th century. The first recorded use of the word "computer" is also from the 17th century, applied to human computers, people who performed calculations, often as employment. The first computing devices were conceived of in the 19th century, and only emerged in their modern form in the 1940s.

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the father of the computer, he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.

The machine was about a century ahead of its time. All the parts for it had to be made by hand, which was a major problem for a device with thousands of parts. Eventually, the project was dissolved when the British Government decided to cease funding. Babbage's failure to complete the Analytical Engine can be chiefly attributed not only to difficulties of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the Analytical Engine's computing unit (the mill) in 1888 and gave a successful demonstration of its use in computing tables in 1906.

Tuesday, 11 November 2014

Information Technology

By Unknown | At 3:56:00 pm | Label : , | 0 Comments

Information technology (IT)

IT is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise.

The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, e-commerce and computer services.

Humans have been storing, retrieving, manipulating and communicating information since the Sumerians in Mesopotamia developed writing in about 3000 BC, but the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)." Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.

Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940) and electronic (1940–present). This article focuses on the most recent period (electronic), which began in about 1940.
Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick. The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered to be the earliest known mechanical analog computer, and the earliest known geared mechanism. Comparable geared devices did not emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed.

Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world's first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. Colossus, developed during the Second World War to decrypt German messages, was the first electronic digital computer. Although it was programmable, it was not general-purpose, being designed to perform only a single task. It also lacked the ability to store its program in memory; programming was carried out using plugs and switches to alter the internal wiring. The first recognisably modern electronic digital stored-program computer was the Manchester Small-Scale Experimental Machine (SSEM), which ran its first program on 21 June 1948.

The development of transistors in the late 1940s at Bell Laboratories allowed a new generation of computers to be designed with greatly reduced power consumption. The first commercially available stored-program computer, the Ferranti Mark I, contained 4050 valves and had a power consumption of 25 kilowatts. By comparison the first transistorised computer, developed at the University of Manchester and operational by November 1953, consumed only 150 watts in its final version.

Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete. Electronic data storage, which is used in modern computers, dates from the Second World War, when a form of delay line memory was developed to remove the clutter from radar signals; the first practical application of this was the mercury delay line. The first random-access digital storage device was the Williams tube, based on a standard cathode ray tube, but the information stored in it and in delay line memory was volatile, in that it had to be continuously refreshed and was therefore lost once power was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the world's first commercially available general-purpose electronic computer.

IBM introduced the first hard disk drive in 1956, as a component of their 305 RAMAC computer system. Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs. Until 2002 most information was stored on analog devices, but that year digital storage capacity exceeded analog for the first time. As of 2007 almost 94% of the data stored worldwide was held digitally: 52% on hard disks, 28% on optical devices and 11% on digital magnetic tape. It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every 3 years.
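
As a quick sanity check on that doubling-time figure, the short sketch below back-calculates it from the two endpoints quoted above, using only Python's standard library; the 3 and 295 exabyte values are taken directly from the estimate cited in the paragraph.

    # Rough check of the "doubling roughly every 3 years" claim, using the
    # figures quoted above: about 3 exabytes in 1986 and 295 exabytes in 2007.
    from math import log

    start, end = 3.0, 295.0            # worldwide storage capacity, in exabytes
    years = 2007 - 1986                # 21 years of growth
    doublings = log(end / start, 2)    # capacity doubled about 6.6 times
    print(years / doublings)           # about 3.2 years per doubling

The result, roughly 3.2 years per doubling, is consistent with the figure given above.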

Database management systems emerged in the 1960s to address the problem of storing and retrieving large amounts of data accurately and quickly. One of the earliest such systems was IBM's Information Management System (IMS), which is still widely deployed more than 40 years later. IMS stores data hierarchically, but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows and columns. The first commercially available relational database management system (RDBMS) was available from Oracle in 1980.

All database management systems consist of a number of components that together allow the data they store to be accessed simultaneously by many users while maintaining its integrity. A characteristic of all databases is that the structure of the data they contain is defined and stored separately from the data itself, in a database schema.
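
To illustrate the relational ideas in the two paragraphs above, here is a minimal sketch using Python's built-in sqlite3 module; the table, column names and rows are invented purely for this example. Note how the schema, the structure of the data, is declared separately from the rows that are later inserted into it.

    # Minimal illustration of tables, rows and columns in a relational database,
    # using Python's built-in sqlite3 module. The data is invented for the example.
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # The schema (structure) is defined separately from the data it will hold.
    conn.execute("""
        CREATE TABLE employees (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            dept TEXT NOT NULL
        )
    """)

    # Rows are the data; the columns were defined once, in the schema above.
    conn.executemany(
        "INSERT INTO employees (name, dept) VALUES (?, ?)",
        [("Ada", "Engineering"), ("Grace", "Engineering"), ("Edgar", "Research")],
    )

    # Queries are expressed over tables, rows and columns, not over storage details.
    for (name,) in conn.execute("SELECT name FROM employees WHERE dept = ?", ("Engineering",)):
        print(name)

A production RDBMS adds concurrency control and integrity enforcement on top of this model, which is what allows many users to read and write the same data simultaneously without corrupting it.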

The extensible markup language (XML) has become a popular format for data representation in recent years. Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their "robust implementation verified by years of both theoretical and practical effort". As an evolution of the Standard Generalized Markup Language (SGML), XML's text-based structure offers the advantage of being both machine and human-readable.
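
As a small illustration of that dual readability, the sketch below parses a hand-written XML fragment with Python's standard xml.etree.ElementTree module; the document itself is invented for the example.

    # A human can read the XML text directly; a machine parses the same text
    # into a tree of elements and attributes. The document is invented.
    import xml.etree.ElementTree as ET

    doc = """
    <catalog>
      <book id="1"><title>Relational Databases</title><year>1970</year></book>
      <book id="2"><title>Markup Languages</title><year>1998</year></book>
    </catalog>
    """

    root = ET.fromstring(doc)
    for book in root.findall("book"):
        print(book.get("id"), book.findtext("title"), book.findtext("year"))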

Business Management

By Unknown | At 3:33:00 pm | Label : , | 0 Comments

Management in business and organisations

Business Management is the function that coordinates the efforts of people to accomplish goals and objectives using available resources efficiently and effectively. Management comprises planning, organising, staffing, leading or directing, and controlling an organisation to accomplish its goals. Resourcing encompasses the deployment and manipulation of human resources, financial resources, technological resources, and natural resources. Management is also an academic discipline, a social science whose objective is to study social organisations.

Management involves identifying the mission, objectives, procedures and rules, and manipulating the human capital of an enterprise to contribute to its success. This implies effective communication: an enterprise environment (as opposed to a physical or mechanical mechanism) implies human motivation and some sort of successful progress or system outcome. As such, management is not the manipulation of a mechanism (machine or automated program), nor the herding of animals, and it can occur in both legal and illegal enterprises or environments. Management need not be seen from an enterprise point of view alone, because management is an essential function in improving one's life and relationships; it is found everywhere and has a wide range of applications. On this basis, management must involve humans, communication, and a positive enterprise endeavour. Plans, measurements, motivational psychological tools, goals, and economic measures (profit, etc.) may or may not be necessary components for there to be management. At first, one views management functionally, such as measuring quantity, adjusting plans and meeting goals; this applies even in situations where planning does not take place. From this perspective, Henri Fayol (1841–1925) considers management to consist of six functions:
  • Forecasting
  • Planning
  • Organising
  • Commanding
  • Coordinating
  • Controlling

Henri Fayol was one of the most influential contributors to modern concepts of management.

In another way of thinking, Mary Parker Follett (1868–1933) defined management as the art of getting things done through people. She described management as a philosophy.

Critics, however, find this definition useful but far too narrow. The phrase "management is what managers do" occurs widely, suggesting the difficulty of defining management, the shifting nature of definitions and the connection of managerial practises with the existence of a managerial cadre or class.

One habit of thought regards management as equivalent to "business administration" and thus excludes management in places outside commerce, as for example in charities and in the public sector. More broadly, every organisation must manage its work, people, processes, technology, etc. to maximise effectiveness. Nonetheless, many people refer to university departments that teach management as "business schools". Some institutions (such as the Harvard Business School) use that name, while others (such as the Yale School of Management) employ the more inclusive term "management".

English speakers may also use the term "management" or "the management" as a collective word describing the managers of an organisation, for example of a corporation. Historically this use of the term often contrasted with the term "Labor" - referring to those being managed.

In the present era, however, management is applied across a wide range of areas and its frontiers continue to broaden. Beyond for-profit organisations, even non-profit organisations (NGOs) apply management concepts; the concept and its uses are not constrained to business. Management, on the whole, is the process of planning, organising, staffing, leading and controlling.

Management operates through five basic functions: planning, organising, coordinating, commanding, and controlling.
  • Planning: Deciding what needs to happen in the future and generating plans for action (deciding in advance).
  • Organising: Making sure the human and nonhuman resources are put into place
  • Coordinating: Creating a structure through which an organisation's goals can be accomplished.
  • Commanding: Determining what must be done in a situation and getting people to do it.
  • Controlling: Checking progress against plans.

