Despite predictions that NT’s rise would ensure Unix’s demise, last year showed that co-existence is the more likely near-term fate for both operating systems. “I do not foresee Unix going by the wayside. NT doesn’t have the robustness,” said Susan Whitney, general manager of worldwide systems sales, IBM, Armonk, N.Y.
Unix vendors, most notably Sun Microsystems Inc., responded to NT’s popularity with lower-cost servers that competed with Intel-based systems directly and with multiprocessing products whose performance was out of the range of NT’s growing market share.
“It’s a broad-based approach from being down in the low end all the way up to machines that the PC world can’t even conceive of yet,” said Anil Gadre, vice president of corporate marketing for Sun, Mountain View, Calif.
There were winners and losers on both the NT and Unix sides of the midrange server market, defined as servers with two to 30 processors.
Tandem Computers Inc., Cupertino, Calif., which in 1997 tied its fate more tightly to NT, made the most impressive leap in sales, growing 1,800 percent to $304 million, excluding its host-based systems.
Unix servers remain strong.
Not all vendors fared so well under the NT banner. Compaq Computer Corp., Houston, appreciated Tandem’s feat enough to buy the company in June, but saw its own midrange share fall 27 percent. Dayton, Ohio-based NCR Corp. focused on the high end of the NT market only to witness a 12 percent drop in sales last year.
Unix purist Sun achieved 12 percent sales growth for its loyalty to the operating system. But Sun could not keep up with Hewlett-Packard Co.’s 20 percent increase to $1.8 billion, which in turn was dwarfed by IBM’s results.
Not all went smoothly for IBM, however. Some resellers complained about the trickle of RS/6000s that flowed from the company’s newly combined manufacturing facility in Rochester, Minn.
The slip-up might have offered HP an opening, but the vendor was contending with its own problems. Sun began arguing to resellers that HP’s plan to combine NT and Unix sales forces showed divided loyalties.
Sun, meanwhile, deepened both its commitment to Unix and its criticism of NT. “You get this funny message, and you have to reboot the [NT] server, and it takes awhile to come back up. That is just an unbelievable annoyance,” Sun’s Gadre said.
Sun’s argument held some truth, in the view of some analysts. “One thing that was disappointing to us was that Wolfpack [NT clusters] was so delayed in being implemented,” said Jerry Sheridan, senior analyst for Dataquest Inc., San Jose.
Tandem and others appeared to take share away from Compaq. As the No. 1 maker of Intel-based servers, Compaq committed to NT, but eight-way SMP turned out to be harder to deliver than the company thought. Compaq “was on a slow path to the enterprise. By acquiring Tandem, they purchased capability,” Sheridan said.
Inventory and pricing problems also hobbled Compaq. But with its Tandem acquisition and pending Digital merger, Compaq could emerge as a midrange powerhouse. One thing is certain: The playing field will look a little different this time next year.
If you are having problems with your drives, the digital asset archive on your array may need a RAID 5 recovery. Acting early can solve the problem before it gets worse.
RAID is the acronym for Redundant Array of Inexpensive (or Independent) Disks. It has many levels, of which the most popular are RAID 1, RAID 5 and RAID 10. Each level provides a different balance of speed and fault tolerance. A RAID 5 volume combines at least three hard drives and spreads data, along with redundancy information, across all of them. The techniques behind the various levels are data mirroring, data striping and data redundancy. Data mirroring increases the overall safety of data by keeping a duplicate copy on a second drive. Data striping separates data across the drives in a fixed logical order to improve speed. Data redundancy, in the form of parity, allows the contents of a failed drive to be rebuilt from the remaining ones.
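The fault tolerance of RAID 5 comes from parity: each stripe's parity block is the bitwise XOR of its data blocks, so any single lost block can be rebuilt from the survivors. A minimal sketch using shell arithmetic, with toy single-byte values standing in for whole stripe blocks:

```shell
# RAID 5 parity sketch: the parity block is the XOR of the data blocks.
# Toy single-byte values stand in for entire stripe blocks.
d1=73; d2=129; d3=42
p=$(( d1 ^ d2 ^ d3 ))          # parity, written to the remaining disk
echo "parity: $p"              # prints "parity: 226"

# If the disk holding d2 fails, XOR the survivors with the parity:
rebuilt=$(( p ^ d1 ^ d3 ))
echo "rebuilt: $rebuilt"       # prints "rebuilt: 129" (the original d2)
```

The same property holds for any number of drives, which is why a RAID 5 array survives the loss of exactly one disk and no more.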
RAID 5 used to be an exclusive and expensive feature found only in high-end servers. Nowadays it is routinely integrated into motherboards during mass production and works with affordable PATA and SATA drives. There are two types of implementation. The first is “software” RAID, which is built into current Windows and Linux versions. “Hardware” RAID, on the other hand, has a distinct controller that manages the array; the controller also handles the writing of data to, and reading of data from, the volume.
If you got caught in a situation where your RAID 5 just wouldn’t work, meaning, for example, that the Dell Linux RAID server you are using could no longer recognize the data stored on it, what would you do? Lose the data the old way by reformatting the disks, or attempt a RAID 5 recovery? If you have a never-say-die attitude, I’m pretty sure you would take the latter approach. But how will you do it?
To recover the files safely with the manual approach, you need to capture the data in every storage block individually. This task requires the skill of an expert; an ordinary individual with no background in programming will not be able to perform it. You also need to determine the RAID 5 configuration before a Windows or Linux RAID recovery can succeed. The questions to ask: how many disks are in the array? What is the disk sequence? What are the array parameters? Think of it this way: if the recovery were a book, these steps would be only the first three chapters. Getting complicated? You can try to do it with the help of software instead.
With software, it’s closer to plug and play. Download and install the software, start the RAID 5 scan, and allow it to complete. One tip: not all RAID 5 recovery software can handle an array with missing disks. Some can tolerate only one missing disk, so check this before you commit.
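On a Linux software RAID like the Dell server mentioned above, the parameters the manual approach asks about can often be read straight from the member disks. A hedged sketch using Linux's mdadm tool; the device names are examples, and real recovery work should only ever touch read-only copies of the disks:

```shell
# Read each member's RAID superblock: it records the RAID level, the
# device's role (position) in the sequence, the chunk size, and the
# array UUID. Device names here are hypothetical examples.
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdc1
mdadm --examine /dev/sdd1

# Ask mdadm to reassemble any array it can identify from those superblocks.
mdadm --assemble --scan
```

If the superblocks are intact, this answers the disk-count, disk-order and parameter questions without any guesswork; dedicated recovery software is for the cases where they are not.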
One of the challenges that many consumers and business owners face is data loss. It can cause problems that affect the growth of your business in a major way, and a business you have invested in deserves protection. One of the precautionary measures you should put in place is having the address of a Mac hard drive data repair service. Such a service can help with recovering data and with protecting your Apple computer system from viruses. They can usually work with all Mac file systems and recover data from a drive, whatever its Mac OS version.
When you lose important data, you don’t have to panic. Mac hard drive recovery experts can often recover all of your lost information. No one is perfect and life is full of uncertainties; there is always a chance of a software virus, one of the major causes of data loss. If you approach an expert, he or she will help you take care of your important information. They have the experience and know how to apply different strategies to recover your data. If you have a problem with a program’s user interface, a software support expert can assist you. You can also search online for more information about hard disk recovery software and services.
The sense of head-spinning change that you get as an IS manager is an offshoot of a fairly simple phenomenon: the restoration of competition in the computer industry. Two centers of vitality are vying for leadership in advancing the state of the art.
One is the Microsoft/Intel camp and its large following of third-party developers. During the past 10 years, this group has toppled many cherished precepts and emerged as the dominant source of innovation.
The other is the Unix community (more precisely, the Unix/Java community), led by Sun, Oracle and Netscape. This camp is closer to the Internet. In the past year, it generated its own army of third-party followers, partly because of a generous infusion of venture capital. The Unix/Java camp now is challenging the PC leaders with its own rapid innovation.
My confidence in renewed competition isn’t because the Unix/Java camp has united behind Unix or because it threatens to displace Microsoft at the desktop. Neither statement is true. But it is true that Unix advocates have assimilated the lessons of the PC revolution and are putting them to use.
For example, the Unix/Java crew has finally learned that appropriate technology available today is more important than the best technology promised for the future. Software is never finished. Chips never run as fast as they might. Getting things out the door when they’re needed is where the action is.
For years, the Unix camp was proud that it had better technology than Microsoft, and it priced its products accordingly. But the PC revolution has taught that technology has to be appropriate to the present needs of customers and priced competitively. Sun learned that lesson and reduced Solaris prices to Windows NT levels.
Rather than try to wring early profits from Java, Sun sought market share, a strategy Microsoft inflicted on its Windows competitors long ago. That may explain the churlish tone Microsoft officials take when they explain their “me-too” stance on Java: it’s never fun to watch your own tactics work against you.
Another measure of Microsoft’s predicament is the migration of development talent toward Java. Microsoft used to command so much mind share that developers marched in lockstep to the Redmond beat. No more.
In addition to developer talent, Sun has gained the high ground on standards. It can say to Microsoft: “Don’t tell me about standards. I’m too busy creating new ones for you to follow.” When Microsoft talks about pushing into the enterprise with distributed systems, it confronts a well-developed CORBA standard with a new protocol, the Internet Inter-ORB Protocol (IIOP), for moving distributed objects across the Internet. Microsoft has to play catch-up again, this time by getting HP to propose the interface between CORBA and Microsoft’s Component Object Model.
On a series of fronts (from messaging to directory services to network application development), Microsoft is struggling to match the best of a new generation of products. If it were confronting a single competitor, it might still be showing its old form. But it’s confronting a coalition of powers (Sun, IBM, Netscape, Novell, Borland, Lotus and so on), each determined to adapt lessons learned during the PC revolution.
Resurgent competition seems to have resulted in proliferating technologies, but in fact they are clustering about the Wintel and Unix/Java camps. There is more noise, but there is also more background integration work being done inside each camp. Savvy information systems managers should pick and choose; they should use what’s best for them and avoid potential lock-ins if zealots in either camp take off in their own proprietary direction.
The choice isn’t always clear, but IS managers will have a valuable, long-term impact if they reward vendors that produce compatible technologies and shun vendors’ more isolating, competitive initiatives.
When I first started managing UNIX systems, I underestimated the amount of time I would spend managing disk space. Traditional UNIX operating systems let you divide disks into 8 or 16 partitions (also called slices). When a slice becomes full, you must either move data to another slice or repartition the disk to make that slice larger. Repartitioning is time-consuming because you must back up the disk, repartition it by modifying the disk label, and restore data as necessary.
A typical UNIX disk analyzer.
Logical-volume technology provides the ability to reserve disk space that can be added to slices as needed. You can increase the size of a slice on the fly without losing data or experiencing significant downtime.
Overview Of Logical Volumes
The examples shown are for an HP-UX 10.20 system. The concepts also apply to other systems, such as AIX, that support logical volumes.
A disk can be divided into logical volumes (analogous to physical partitions). However, a logical volume can span more than one physical disk. Logical volumes are grouped into logical volume groups. Logical volumes cannot span more than one logical volume group.
The first step to using logical volumes is to create a volume group. Next, disks are added to the logical volume group and the logical volumes are created. It is important to leave some disk space unallocated so it can be added to logical volumes if they become full.
Unallocated space cannot be allocated across volume groups.
Using The Logical Volume Manager
On HP-UX, the set of commands used to manage logical volumes and volume groups is referred to as the Logical Volume Manager (LVM). The first step to using LVM is creating a volume group. Traditionally, volume-group names on HP-UX 10.20 are /dev/vg00, /dev/vg01, and the like. You can use any name you wish; more meaningful names (/dev/development, /dev/data, and so forth) might make administration simpler. The steps to create the volume group are
1. Create the physical volume with pvcreate. If the disk has previously been configured for LVM, the -f option is needed to overwrite the old LVM configuration.
# pvcreate -f /dev/rdsk/c0t4d0
Physical volume “/dev/rdsk/c0t4d0” has been successfully created.
This example uses the disk installed with SCSI target ID 4. If you are not sure of the SCSI ID of your device, use the ioscan command to display the devices installed on your HP-UX system.
2. Create the volume-group directory and group file:
# mkdir /dev/data
# mknod /dev/data/group c 64 0xNN0000
where NN is a hexadecimal number unique to this volume group. I suggest starting with 00 and working up.
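The 0xNN0000 minor number can be generated rather than typed by hand. A small sketch; the helper name is my own, not an HP-UX command:

```shell
# Hypothetical helper: format volume-group number N as the LVM
# minor number 0xNN0000 that mknod expects for the group file.
vg_minor() { printf '0x%02x0000\n' "$1"; }

vg_minor 0    # prints 0x000000 (first volume group)
vg_minor 1    # prints 0x010000
vg_minor 16   # prints 0x100000
```

Generating the number this way avoids the easiest mistake here: accidentally reusing an NN that another volume group already holds.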
3. Create the volume group:
# vgcreate /dev/data /dev/dsk/c0t4d0
Volume group “/dev/data” has been successfully created. Volume Group configuration for /dev/data has been saved in /etc/lvmconf/data.conf
4. Create the logical volume:
# lvcreate /dev/data
Logical volume “/dev/data/lvol1” has been successfully created with character device “/dev/data/rlvol1”. Volume Group configuration for /dev/data has been saved in /etc/lvmconf/data.conf.
Each time you execute lvcreate, it creates a new logical volume named /dev/data/lvolN and automatically increments N. A specific name can be specified with the -n option and the size in megabytes can be specified using the -L option:
# lvcreate -n stocks -L 200 /dev/data
Logical volume “/dev/data/stocks” has been successfully created with character device “/dev/data/rstocks”. Logical volume “/dev/data/stocks” has been successfully extended. Volume Group configuration for /dev/data has been saved in /etc/lvmconf/data.conf
The default size of a logical volume is 0. You can increase the size of a logical volume with the lvextend command:
lvextend -L 300 /dev/data/stocks
This command increases the size of the logical volume to 300MB. When a volume group is created, a physical extent size is specified. The default value is 4MB. All logical volumes must be specified in whole physical extents. The LVM commands will round up to the nearest physical extent, so you normally do not need to worry about this.
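The rounding behavior is easy to model. A sketch of the arithmetic LVM applies; the function is illustrative, not an HP-UX command, and it assumes the default 4MB extent unless told otherwise:

```shell
# Round a size in MB up to a whole number of physical extents.
round_to_extent() {
  pe=${2:-4}                               # extent size in MB, default 4
  echo $(( ( ($1 + pe - 1) / pe ) * pe ))
}

round_to_extent 300    # prints 300 (already a multiple of 4)
round_to_extent 301    # prints 304 (rounded up to the next extent)
round_to_extent 300 8  # prints 304 with an 8MB extent size
```

This is why a request for 301MB quietly yields a 304MB volume: space is always allocated in whole extents.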
Now you are ready to use the logical volumes for file-system space, swap space, or as raw disk space for a database. The device names you use for commands like newfs are /dev/data/raccounting, /dev/data/rlvol1, and so forth:
newfs -F hfs /dev/data/rstocks
mount -F hfs /dev/data/stocks /mnt
hfs stands for high-performance file system and is the standard UNIX file system on HP-UX systems.
Increasing File-System Size
On HP-UX 10.20, you increase the size of a file system residing on a logical volume by first unmounting the file system.
If the file system is busy, you may need to bring the system to single-user mode with init S before you can unmount it. Alternatively, use the fuser command to display which processes have open files on the file system and kill those processes. Some file systems (such as /opt and /usr) will always have open files if you are running VUE or CDE. To extend these file systems, you must be in single-user mode.
Increase the size of the logical volume:
lvextend -L 500 /dev/data/stocks
Remember that 500 represents the new size of the logical volume in megabytes.
Increase the file-system capacity to match that of the logical volume.
Remount the file system using the mount command.
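Putting the steps together, and assuming the logical volume and mount point from the earlier examples, the full sequence looks like this. The extendfs command shown is HP-UX's tool for growing an unmounted HFS file system to fill its logical volume; note that it operates on the raw (character) device:

```shell
# Grow /dev/data/stocks and its HFS file system from 300MB to 500MB.
umount /mnt                            # the file system must be unmounted
lvextend -L 500 /dev/data/stocks       # grow the logical volume to 500MB
extendfs -F hfs /dev/data/rstocks      # grow the file system (raw device)
mount -F hfs /dev/data/stocks /mnt     # remount
```

The whole operation takes a few minutes of downtime for one file system, compared with the backup-repartition-restore cycle a fixed slice would require.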
The HP-UX System Administration Manager (SAM) can perform most LVM tasks. However, it cannot
* extend a logical volume to a specific disk (this is useful for controlling the performance of your disks and ensuring the fastest disks are being used appropriately)
* create a root volume group and a root logical volume (for hosting the root file system)
* increase the size of active file systems (you may have to bring the system to single-user mode and use the HP-UX commands)
There are HP-UX commands to perform these tasks. Please refer to your LaserROM documentation for more details.
The Journaled File System
Many vendors support both LVM and a journaled file system (JFS). A journaled file system provides faster system recovery and improves data reliability. It is important not to confuse LVM and JFS: they are separate facilities that can be used together or separately. On HP-UX, JFS is an alternative to the high-performance file system (HFS).
By combining zippy hardware with its new CDexecutive 1.1 software, Logicraft Information Systems Inc. has produced a turnkey CD ROM library system that integrates seamlessly into a NetWare bindery-based environment via Windows NT’s File and Print Services for NetWare.
Such integration comes at a price: The top-of-the-line, 12-drive CDexecutive/XF model 5300 system we tested costs $20,890. We could assemble a comparable unit for considerably less money, but we’d need lots of time to do so.
Given users’ increasing demands for instantaneous access to CD ROM-based information, overburdened network managers tired of swapping, sharing, and mounting CD ROM volumes may not mind shelling out for this very scalable, plug-and-play optical retrieval solution. The product began shipping Oct. 31.
The number of reference CD ROM titles appears to be growing exponentially, creating a strong demand for server-based clusters of CD ROM devices within a workgroup. The currently popular automated jukebox approach of swapping out one CD for another takes approximately 3 to 12 seconds per exchange and doesn’t really lend itself to multiuser access.
Leapfrogging that process, the CDexecutive/XF 5300 system implements intelligent software that dynamically mounts as many as 56 scalable CD ROM drives as network-shared volumes; the software also automatically associates each CD ROM title with predefined access permissions, run-time scripts, and usage tracking facilities, further turbocharging the process.
Giving the platters a spin
PC Week Labs tested a CDexecutive/XF 5300 optical-retrieval tower system configured with a 75MHz Pentium CPU, 32M bytes of RAM, a 4.1G-byte hard disk, and 12 Plextor Inc. PX43-CS quad-speed CD ROM drives using two Adaptec Inc. AHA-3940A Fast SCSI host adapters.
In terms of software, the test system came with Windows NT 3.51, File and Print Services for NetWare, Windows NT Services for Macintosh, and CDexecutive 1.1 software installed.
It was nearly child’s play to then connect the system to our 16-node test LAN with a PCI (Peripheral Component Interconnect) bus-mastered 3Com Corp. EtherLink III adapter; all we needed to do was set a handful of network-related parameters.
Technology gets better, we swear!
We managed our CDexecutive/XF system using each of the three applications that constitute the CDexecutive software suite. CDcatalog allowed us to interact with the CD database, while CDstatus let us view the status of all 12 drives. With CDscript, we were able to create and maintain client run-time commands, as well as distribute access to the extensive data-storage capacity provided by 12 online CD ROM reference titles.
This suite of server-based CD ROM utilities–though not integrated but launched from a common front end–provided an easily accessible method of maintaining and managing our CD titles. Nonetheless, better integration between these utilities would improve the overall look and feel of the CDexecutive software, which is the real highlight of the CDexecutive/XF system.
Crafting a catalog
Not only did the CDexecutive 1.1 software perform the magic of maintaining an accurate catalog of each of the 12 packed CDs we mounted, but it also automatically shared each CD as a Windows NT volume and simultaneously mounted that CD as a NetWare volume when we inserted it into a vacant drive.
Using CDcatalog, the process of creating a catalog of a new CD title was simple. First, we inserted a new CD into any of the drives and clicked the Autosense button. After a moment of accessing the CD, specific fields in the CD database, such as volume label and serial number, were automatically filled in, helping us to verify which CD we put into the drive.
We concluded cataloging by completing other CD-specific information, including an extended CD title, publisher name, and release date fields.
We were able to limit the number of active users able to access a CD to help keep the license police away, as well as disable the eject button common to all CD ROM drives. It’s pretty annoying when someone ejects a CD while your computer is attempting to access it.
Pushing the limits
The software’s unique and speedy approach notwithstanding, there are hardware limitations to how quickly data can be delivered from a CD ROM drive.
On our test LAN, we measured a 50 percent increase in access time when we doubled (from eight active clients to 16) the number of clients accessing a single CD title. However, we think this performance hit is a result of reaching the maximum data throughput of today’s best CD ROM drives.
One solution would be to use duplicate CD titles in additional drives and have clients configured to access different resources.
Although CDexecutive/XF does not, some products circumvent this hardware bottleneck by providing the ability to distribute, on the fly, the data demand among multiple drives containing the same title.
To test CDexecutive’s ability to play in the NetWare environment, we added it to our NetWare-only LAN. Once we had configured the backbone network address using the Windows NT Control Panel, our CDexecutive took on many of the personality traits of our NetWare 3.12 server.
We logged in to the Windows NT File and Print Services for NetWare and found all active CD ROM drives were immediately available as mounted NetWare volumes. We verified the integration into our NetWare LAN using Novell’s Virtual Loadable Module client and Client32 for Windows to access each volume from DOS and Windows 95, respectively.
As a turnkey solution, we believe Logicraft has chosen the right components—a PCI bus architecture with a bus-mastered Ethernet adapter and dual SCSI adapters attached to multiple quad-speed CD ROM drives—on which to pre-install a carefully selected handful of Windows NT-based services.
In fact, the ability of this product to work so effectively in a NetWare environment is largely due to the functionality provided by NT’s File and Print Services for NetWare. Microsoft has done a good job of providing developers with a powerful reason to build products based on Windows NT rather than on NetWare.
When your laptop hard drive crashes and you lose your most important files, you may have to get the help of an external party to get that data back. If you are unfamiliar with the inner workings of your laptop’s drive, it is advisable not to attempt the recovery yourself, as you may further damage the hard drive and make professional recovery that much harder. In such a situation, selecting the best laptop hard drive recovery service provider becomes the question. There are many factors you have to consider before deciding on a data recovery service provider.
The security of your data is the key factor to think of, and its weight depends on the confidentiality of the data stored on your laptop. The more security your data needs, the more closely you should look into a provider’s focus on security. Look over the track records of the laptop hard drive recovery service providers near your area. The other factor you may want to consider is the professionalism of the company’s technical staff. Are they technically qualified to do the data recovery job? If you ignore these points, you may lose your data and never get your files recovered at all.
Losing your data to a laptop hard disk failure is stressful and often panic-inducing. It can be a serious situation, as sometimes your job or business depends entirely on the files stored on your laptop’s hard drive. At such a moment, you are probably not in the mood to work out ways and means of recovering the files yourself. Further, it is rarely worth spending money on equipment for a one-time recovery. This is why laptop hard drive recovery companies are here to help. The right provider will typically have experience with a wide variety of laptop systems. The only thing you have to do is let them take on the challenge of recovering your data. Their objective is to recover the data effectively and hand it over to you as soon as possible.
For a professional laptop data recovery engineer, it takes only a few minutes to understand the problem and start recovering the files. These engineers typically have a large set of proprietary tools necessary for professional recoveries. Whatever the specifications of your hard drive, they will likely be able to find the parts for it. The technical approach, and the amount of time spent on the recovery, depend on the condition of your hard drive: if it is severely damaged, the recovery takes longer and demands a more involved technical approach. Thus the cost of hard drive recovery can vary in some cases.
A professional engineer opens a laptop hard drive.
For people on the go, portable computing has become an essential part of the working environment. Many of us face the need for computing power at off-site locations. For most, a conventional Intel-based laptop computer fills the bill. For some UNIX users, however, such systems are at best a compromise, even when augmented by ports of familiar UNIX utilities. For some people, a non-UNIX, Intel-based solution is no solution at all. These users may need a Solaris-running, SPARC-based workstation they can carry with them for development, system administration, or product demonstrations. This is the audience for whom RDI Computer Corp.’s PowerLite portable workstation was developed. RDI positions the system as the performance equivalent of a SPARCstation 5. How close does this 110MHz, seven-pound system come to a real workstation, and is it worth the weight?
As with any portable computer, the PowerLite is essentially self-contained. The screen, keyboard, and trackball are all part of the system unit, and the Solaris operating system is pre-installed at the RDI factory. Thus, basic installation is reduced to an inventory of parts, plugging in the power adapter, and powering up the system. Depending on the options that have been ordered, installation can be somewhat more complex, but not by much. For example, external monitors are supported, as are external keyboards and mice. The connectors for such peripherals are labeled clearly on the rear connector panel of the PowerLite and are explained well in the accompanying documentation.
Detailed configuration of the system requires some planning. If the PowerLite will always operate in standalone mode, only user IDs and other rudimentary UNIX setups are required. If, however, the system will be part of a network or of different networks at different locations, more thought is required. In addition to the Solaris operating system, the PowerLite’s software includes RDI’s Virtual Workgroup Architecture (VWA). VWA is a set of software tools that allows the system to be moved easily between different environments.
Among the key components of VWA are AutoNET and Join. AutoNET detects what network the PowerLite is attached to and selects an appropriate configuration automatically, thus avoiding a possible system hang. Join, licensed from Competitive Automation (Menlo Park, CA), works with AutoNET, providing Dynamic Host Configuration Protocol (DHCP) services. Depending on the environments the system moves between, installing Join may not be required. For example, in our test environment, the network structure is replicated between our main lab and off-site labs, allowing us to move systems configured with static IP addresses between locations easily without a need for DHCP. For most nonstatic environments, however, RDI includes both DHCP client and server software as part of VWA.
Documentation includes two manuals: a user’s guide and a system reference set; plus the manuals and release notes that come with Solaris. The PowerLite user’s guide provides basic-installation and system-operation information, along with a command reference for the built-in fax/data modem, information on the power-manager software, and boot PROM settings. The information in this booklet is concise, well-written, and easy to understand. It is written for a reasonably knowledgeable end user, not necessarily a system administrator. The system reference set includes more detailed hardware information, a software user’s guide, instructions for Solaris installation, and reference material on the various components of VWA. The three-ring-bound reference set also is well-written and easy to understand and includes adequate illustrations to augment the text. The indexes for both manuals, however, could use additional attention.
Although the concept of expandability may seem incongruous when associated with a laptop-sized portable, the PowerLite provides a surprising array of options. Base memory for the system is 32MB but can be expanded to 64MB, 96MB, or 128MB. The unit will accommodate two 2 1/2-inch Fast SCSI-2 hard disks (810MB or 1.2GB), plus a third if the low-profile 3 1/2-inch diskette drive is sacrificed. The diskette drive also may be replaced with one or two Type I or Type II PCMCIA slots or a single Type III adapter.
For users requiring further disk expansion or SBus slots, a Peripheral Expansion Unit (PXU) is available. The PXU provides space for two additional 3 1/2-inch SCSI hard disks and two standard single-slot SBus cards (or one two-slot card). The PXU is slightly thinner than the main unit but occupies the same footprint and attaches to the bottom of the main unit. Once in position, the PXU and the main unit lock together with latches in the PXU. Signal connections are provided by a 120-pin interface connector that protrudes through a sliding panel on the base of the main unit. A separate AC adapter is provided for the PXU.
We were impressed by the range of connectivity options provided by the PowerLite’s rear panel. These connections include an 8-pin miniDIN connector for an external mouse or keyboard, a standard Centronics-compatible parallel port, an RJ-11 phone jack for the built-in fax/data modem, a 36-pin dual serial/AUI connector, a 50-pin Fast SCSI-2 connector, a 10Base-T Ethernet port (RJ-45), a 13W3 external video connector, and 8-bit audio-in and audio-out jacks. Unlike many manufacturers, RDI includes both the pigtail adapter cable that splits out the two serial ports and the AUI Ethernet adapter from the combined connector on the back of the system. RDI also includes an adapter plug that converts the 13W3 video connector to a standard 15-pin VGA connector, so a non-Sun external monitor can be used. A VWA control panel, accessed via a pop-up menu under OpenWindows, provides user control for which monitor is active (built-in screen, external monitor, or both). Resolutions of 640×480 and 1,024×768 are supported on both the built-in LCD screen and an external monitor. Using only the external monitor allows a resolution of 1,280×1,024 to be selected.
Operation And Ease Of Use
In operation, the PowerLite functions essentially like any SPARCstation. Although rearranged to fit within the available real estate, the keyboard of the system provides the same 104-key functionality as a Sun-5 keyboard. The spacing of the primary keys is the same as a conventional AT-style keyboard, so only minor adjustments are required of the user when typing. We found the wide wrist-support area in front of the keys to be comfortable, but the small built-in three-button trackball was difficult to adjust to. We suspect that many PowerLite users also will prefer using an external mouse attached to the mouse/keyboard port provided on the back panel.
Although Solaris 2.5 is now the standard load for PowerLites, Solaris 2.4 was current at the time our test system was shipped. Solaris 1.x also is available for users who have not yet made the switch to the SVR4-based version. Until testing of VWA under 2.5 is completed, however, Solaris 2.4 provides a solid operating system for the PowerLite.
According to RDI documentation, the PowerLite is 100% compatible with standard SPARC systems. Thus, RDI indicates that any Solaris software will run on the system. We loaded several Solaris-based applications and found no incompatibilities during our testing. The software bundled with the system also includes Sun’s Wabi 2.0, allowing MS Windows applications to be loaded and run on the machine.
Our test system was loaded with 64MB RAM and two 810MB hard disks. Weighing more than seven pounds, this configuration is substantially heavier than most current conventional Intel-based laptop computers. Few such MS Windows systems, however, provide either the disk or the RAM capacity of the PowerLite. None provides SPARC compatibility, and none that we have seen includes a built-in Ethernet port. Battery life for the PowerLite is a meager one hour. Considering the power requirements of the contents of the system, however, an hour of battery operation is not unreasonable. Additionally, the PowerLite is designed not as a battery-operated system but rather as a portable system to be powered via its AC adapter. Thus, it may be more appropriate to think of the battery as a one-hour UPS.
Our performance tests of the PowerLite included the SPEC92 benchmarks, SPECint92 and SPECfp92, which test integer and floating-point performance, respectively. As is our custom, we ran these benchmarks from within OpenWindows with all the normal init level 3 system daemons running. Our result for the SPECint92 suite was 73.67, and our SPECfp92 result was 64.18. RDI’s published results for this 110MHz model of the PowerLite are 77.00 for SPECint92 and 65.30 for SPECfp92, presumably run in single-user mode with no graphical interface overhead, as is the custom of many benchmarking engineers. Although we prefer the more realistic approach of our multiuser, GUI-based runs, our results are well within the tolerances we have seen with other systems and are thus supportive of RDI’s claims.
Considering our GUI-based benchmarking preference, these results are on a par with Sun’s published numbers for its desktop SPARCstation 5 systems, 78.60 for SPECint92 and 65.30 for SPECfp92. The PowerLite’s performance is significantly better than our lab results with the 100MHz R4600PC model of SGI’s Indy, which came in at 55.32 for SPECint92 and 43.73 for SPECfp92. For an Intel Pentium comparison, consider our review of the AST Premmia GX P/133. That system, running Solaris 2.5 for Intel, scored 85.45 for SPECint92, about 12% higher than the PowerLite. The 133MHz Pentium system scored a mere 38.87 on SPECfp92, however, far lower than the PowerLite. In this respect, it also is important to note that we were running Solaris 2.5 on the Premmia, which accounts for some of the performance difference. (Operating-system-level functions under Solaris 2.5 are generally faster than the same functions under Solaris 2.4.)
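For readers who want to run the same comparisons against their own systems, the small sketch below (our own illustration, not part of the SPEC suite) computes percent differences from the scores quoted in this review:

```python
# Hypothetical helper for comparing SPEC92 scores; the figures are
# the lab results quoted in this review.
def pct_diff(a, b):
    """Percent by which score a exceeds (or trails, if negative) score b."""
    return (a - b) / b * 100.0

powerlite = {"SPECint92": 73.67, "SPECfp92": 64.18}  # PowerLite 110
premmia = {"SPECint92": 85.45, "SPECfp92": 38.87}    # AST Premmia GX P/133

for suite in ("SPECint92", "SPECfp92"):
    delta = pct_diff(premmia[suite], powerlite[suite])
    print(f"{suite}: Premmia vs. PowerLite {delta:+.1f}%")
```

The sign of each result makes the trade-off immediate: the Pentium system leads on integer work, while the PowerLite’s floating-point advantage is much larger.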
Although we did not run server-oriented benchmarks such as bigdisk (the disk I/O benchmark we often use), we tested the system in various environments that a mobile user might encounter. In doing so, we found the overall system performance of the PowerLite good. FTP transfers from a PC client through the PowerLite to an attached disk subsystem, for example, were as fast as we have seen with other workstations across a 10Base-T network. This would indicate that both the PowerLite’s disk I/O system and its networking circuitry function at expected SPARCstation 5 levels.
How It Rates
Installation of the PowerLite 110 is minimal because of the self-contained nature of the system and because the operating system is factory installed. Further configuration of the system follows standard SPARCstation procedures. DHCP software is included as part of the VWA tools, allowing the PowerLite to be configured for dynamic IP address assignment or with a static IP address. We rate the system as excellent in the installation category.
Documentation for the PowerLite includes both the standard Solaris documentation that accompanies the operating-system package and supplemental documentation specific to the PowerLite. The two PowerLite-specific manuals are well-written and easy to understand. Coverage includes hardware and VWA components and is sufficient for both end users and system administrators. The marginal indexing of the manuals, however, detracts from an otherwise excellent manual set, resulting in a rating of good for documentation.
Expandability of the PowerLite is accomplished with either external peripherals or RDI’s PXU, which attaches to the bottom of the unit. Although the system unit will accommodate two 2 1/2-inch hard disks, the PXU provides both disk expansion and SBus-card-expansion slots. Additionally, the rear connector panel of the system unit supplies all the ports provided on a full-size SPARCstation, plus a connector for an external mouse or keyboard/mouse combination. The external-monitor port is a Sun-standard 13W3 connector, and an adapter is included for use with a conventional VGA monitor. The PowerLite also can be configured with up to 128MB RAM, far exceeding the capacity of most laptops. We rate the PowerLite’s expandability as excellent.
Operation and ease of use both rate excellent for a system of this design. The PowerLite 110 performed flawlessly during our testing and displayed no compatibility problems with third-party software. Although the overall keyboard arrangement has been compressed to fit within the available space, the size and spacing of the home keys is the same as a conventional AT-style keyboard, making normal typing easy. The supplemental keys of a standard Sun-5 keyboard are all included in a logical arrangement. Although the three-button trackball and the LCD screen of the PowerLite are smaller than those of conventional workstations, both represent a good compromise between size and convenience. The inclusion of Wabi 2.0 is an added benefit and is likely to be useful to mobile users, too. Ease of use is further enhanced by VWA components, including Join and AutoNet, which provide DHCP services and automatic network configuration at boot time.
Our performance tests show a SPECint92 score of 73.67 and a SPECfp92 score of 64.18 for the PowerLite 110. Considering differences in test environments, these scores put the PowerLite on a par with Sun’s SPARCstation 5 and substantially higher than our lab results for SGI’s Indy. Although our recent test of the 133MHz Pentium-based AST Premmia GX P/133 gave an edge to the AST on integer performance, the PowerLite was much higher on floating-point operations, as would be expected. Based solely on the comparison to its SPARCstation 5 cousin, however, the PowerLite’s performance is average.
Overall, we found the PowerLite 110 convenient and a pleasure to use. Although it is heavier than many Intel-based laptops, the PowerLite’s full SPARCstation functionality is more than worth the extra weight. That self-contained functionality is further augmented by the ability to plug in external conveniences such as a monitor, a mouse, or a keyboard/mouse combination, providing the user with the best of both stationary and portable worlds. Based on these convenience and performance factors, the PowerLite does, indeed, have uses beyond that of remote software demonstrations. In fact, it may be an ideal platform for multisite system administrators and on-the-road developers of Solaris-based software. Thus, we give the PowerLite 110 an overall rating of excellent for a system of this design.
It has been said that the way we handle things reflects our personality. The same principle applies to the way we handle our computer systems at home or in the office. The more careless we are, the more problems our computers will experience down the road. One of the most common problems computer enthusiasts encounter is a hard drive crash. This failure can be traced to several causes. But how can we keep it from happening to our computers?
The dreaded hard drive crash
When we speak of a hard drive crash, what comes to mind is the failure of our Mac or Windows system to respond. Careless handling often leads to this problem, so one way to avoid it is to perform consistent hard disk maintenance; routine maintenance prolongs the lifespan of a computer’s hard disk drive. For those of us who are fond of downloading applications, installing an antivirus tool is another safeguard, since a good antivirus tool can block unsafe sites. But remember that computers are still machines with a finite life span. To be safe, generate a weekly backup of the important files you store on your computer. That way, you can retrieve your files despite the occurrence of hard drive errors.
Just like the human brain, the hard drive plays a significant role in most computers. It is through the hard drive that we store files and applications. It also lets game enthusiasts download online games and gives them easy access to those stored files at playtime, anytime.
A hard drive crash is a condition wherein the computer’s hard drive fails to respond, preventing you from accessing the unit or opening files. Sometimes computer viruses cause these failures; viruses can also corrupt files stored on your computer, which can later damage the hard drive itself. In addition, long-term use will eventually cause a hard drive to fail. However, you can do something to avoid this type of failure. The secret is simple: handle your computer with a little care. Careful handling protects your computer and can even extend its life span. So, starting now, treat your computer delicately!
A hard drive crash is one of the most serious disasters that can engulf an end user in the world of computing. Before employing any diagnostic measure on a crashed hard disk, it is imperative to have a clear overview of the hardware itself. Unlike many other devices in a computer, a hard disk is not a purely electronic device. The fact that a hard disk is both electrical and mechanical puts it at higher risk of physical failure.
Being a mechanical device means a hard disk is subject to the damaging effects of wear and tear, so a crash may be predictable if the device is not treated well. An abrupt power failure striking a disk in the middle of a write can lead to a major crash. Extreme heat on the surface of a running disk can cause sudden electronic circuit-board failure. Jostling or bumping your PC, Mac, or laptop while it is running can also cause a severe crash. A malfunctioning or clogged air-filter intake limits the amount of air circulating in a running hard disk, and this can lead to a crash as well.
Doctors usually rely on symptoms to identify whether a patient has a particular disease. Computer technicians and data recovery professionals use a similar process when dealing with hard drive failures. Troubleshooting computer problems requires a step-by-step process, and knowledge of computer applications and terminology helps ensure a successful hard drive repair. There is no perfect rule to follow when diagnosing a computer, but one place to start is identifying the hard drive failure symptoms. It is worth noting that these symptoms may not occur simultaneously.
Identifying hard drive failure symptoms is an effective technique used by most computer technicians during troubleshooting. A computer system is complex, and its components are sensitive. That is why, before doing any repairs, technicians and experts usually observe and look for the probable signs behind the failure. Identifying symptoms also saves time and effort for both the computer’s owner and the technician: once symptoms are correctly identified, the applicable remedies can be applied immediately. Identifying symptoms is not enough on its own, though; adequate troubleshooting knowledge and reliable techniques also come into play.
Using a POS software system at your retail store can be a real eye-opener, but it isn’t without its issues. Getting the system installed can be a pain, of course, especially for technical, accounting, and operations staff. Each inventory item must be uniquely identified, counted, costed, and entered into the system as you switch over, and you must be scrupulous about tracking inventory received and sales. AccuPOS point of sale provides point-of-sale data-capture, inventory-management, and accounts-receivable capabilities for single or multiuser environments. If your computers are not networked, or they’re at more than one location, AccuPOS point of sale can integrate data from stand-alone machines into a central database.
AccuPOS point of sale supports electronic cash drawers, bar-code readers, credit card readers, and receipt or invoice printers; less comprehensive and less expensive versions of AccuPOS point of sale are also available. The Sales Point module is straightforward and easy to use. It provides numerous payment options, and it handles rentals as well as sales. It also makes returns, back orders, and cash payouts simple. It can produce an almost endless variety of sales reports (by item, by salesperson, by vendor, by date, and so on), as well as end-of-day cash-reconciliation worksheets. In-house charges post to the Accounts Receivable (A/R) module, which tracks payments on account, assesses interest charges, and generates customer statements and aging reports.
Though the A/R module doesn’t have lots of bells and whistles, it performs all the basic A/R functions adequately. The Inventory Analyzer module is AccuPOS point of sale’s greatest strength. The system tracks inventory items and suppliers, and generates purchase orders, receiving reports, and a wide array of inventory-management reports. A variety of pricing methods are available, including add-on amounts, percentage of cost, margin percentage, and custom formula. You can set up to seven prices for each item. AccuPOS point of sale is not a complete accounting package, but you can integrate its data into other accounting systems, including MultiLedger and Connected, using AccuPOS point of sale’s export facility or a separate module called Accounting Interface Module (single-user $95, multiuser $135). AccuPOS point of sale’s installation manual is good, but its user manual is not. Though AccuPOS point of sale’s interface is logical and easy to use, the system is complex, and its weak documentation can be frustrating. Overall, AccuPOS point of sale is an easy-to-use, comprehensive point-of-sale system from which you can derive valuable information. There are few point-of-sale needs that it can’t meet.
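To make the pricing methods concrete, here is a sketch of the arithmetic behind three of them (the function names are ours, not AccuPOS’s actual interface):

```python
def price_add_on(cost: float, amount: float) -> float:
    """Add-on amount: a fixed dollar amount on top of cost."""
    return cost + amount

def price_pct_of_cost(cost: float, markup_pct: float) -> float:
    """Percentage of cost: markup expressed as a share of cost."""
    return cost * (1 + markup_pct / 100)

def price_margin(cost: float, margin_pct: float) -> float:
    """Margin percentage: profit expressed as a share of the selling price."""
    return cost / (1 - margin_pct / 100)
```

The last two are easy to confuse: a 50% markup on a $10.00 cost yields a $15.00 price, while a 50% margin on the same cost yields $20.00, which is why a system that supports both (plus custom formulas) matters.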
Round the corner of a newly opened exhibit at the Smithsonian Institution, in Washington, D.C., and you’ll see a photograph of a young man named John V. Atanasoff. Below it is a much smaller photo of John Mauchly.
For decades it would have been the other way around, if you saw Atanasoff’s picture at all. Atanasoff invented the modern computer, but it took a 1973 court ruling and years of campaigning by his family and others to gain him recognition as a founding father of the information age.
Atanasoff’s rise plays like a Hollywood script. It’s the tale of a ray of inspiration, fueled with a little bourbon in an Illinois bar, that created the computer; a liberal dose of misplaced trust; and millions in lost royalties.
It’s also the tale of a genius inventor whose brainstorm eventually worked its way into everyone’s life. It’s almost impossible to imagine present-day money management, stock trading and banking without computers.
Atanasoff, a theoretical physicist, taught at Iowa State College (now Iowa State University), in Ames. In the late 1930s he began thinking about ways to simplify the myriad computations necessary for his research. Breakthroughs in electronics, such as the invention of the vacuum tube and condenser in the 20th century, offered professionals like Atanasoff the opportunity to discover new ways to compute faster and more accurately.
He got hung up on a basic problem: How do you get the machine to remember what it has already done? A long, aimless drive from Iowa to Illinois ended in a bar and led to an inspiration: through electronics, he could use a base-two binary format, which uses the digits 1 and 0 as its basis. In 1939 Atanasoff, along with graduate student Clifford Berry, devised a machine that satisfied the four basic precepts of computing. It had input and output. It held data that could be modified. It had a memory. And it worked.
In 1941 Atanasoff welcomed a visit from John Mauchly, an instructor at Ursinus College, in Pennsylvania, who had an interest in computers. Exactly what happened while Mauchly stayed with Atanasoff became the core of a lawsuit filed 26 years later. Did Mauchly examine the kitchen-table-size machine as it sat in a basement room on the Iowa campus? Did he take detailed notes? Did he use what he saw and learned as the basis for his own ENIAC (electronic numerical integrator and computer) system, which for years was credited as the first modern computer?
Mauchly said “no.” His widow, Kathleen, to this day defends his work. Mauchly still rates a place in history for the contributions he made with the ENIAC, the computer that helped scientists make the atomic bomb.
But in 1973 a federal court ruled that Mauchly’s 1946 patents were invalid. It said Atanasoff’s work preceded Mauchly’s by more than a year and that under patent law Atanasoff’s could be considered the first computer.
Ironically, a computer maker, not Atanasoff, filed the suit. Honeywell had tired of paying patent royalties to Sperry Rand, which had purchased the rights from Mauchly and co-inventor J. Presper Eckert. By the time the lawsuit was filed, Atanasoff, distracted by his World War II defense work and by a lack of research funding to take his invention further, had already moved on to wealth and success inventing automatic packaging equipment. The inventor hadn’t even realized how much his computer and Mauchly’s ENIAC had in common until near the beginning of the trial in 1967.
Atanasoff got virtually nothing from the verdict, not even the public recognition he thought would be his. As late as the mid-1980s Encyclopaedia Britannica still referred to Atanasoff as a mathematician and physicist who “constructed what some regard as a prototype of an electromechanical digital computer.” But by 1990 the words “what some regard as” had been dropped. Other accolades from computer societies and the public have finally come, but only in recent years.
Atanasoff is now 86 years old. He and his wife, Alice, who acts as his spokeswoman now that his health is poor, credit much of Atanasoff’s recognition to Clark Mollenhoff, an Iowa journalist and Washington and Lee University professor, who championed Atanasoff’s cause in his book, Atanasoff: Forgotten Father of the Computer (Iowa State University Press).
A native of Hamilton, N.Y., Atanasoff has called a rambling, wooded farm near Frederick, Md., his home for the last 26 years. The contemporary, concrete-walled house he designed is striking in its simplicity. Seated in his favorite chair beneath huge glass-paneled overlooks, he has a breathtaking view of the countryside.
Almost daily, he gets letters and cards from well-wishers around the world. Although the debate about his and Mauchly’s work continues in professional circles, the rest of the world appears to have settled the matter in Atanasoff’s favor, a little late, but still appreciated.
Back-shop operations, the tasks that ushered in computers at many agencies, remain the most popular functions. Word processing is No. 1: 100% of agencies with computers do it. Spreadsheets, revenue/income analysis, customer billing and client record keeping follow, with 89% to 94% each.
Media buying (69% of agencies) and trafficking (63%) are also widely done on computers, as are typesetting (64%) and comps and layouts (62%).
And the information boom has made strong inroads as well. Electronic information sources are available at a solid 62% of the shops. Market analysis is done by computer at 60% of the agencies; client-industry tracking is done at 59% of the shops; customer database accessing is done by 54%; and new-business prospect identification is done by 52%.
One-third of the agencies send messages on electronic mail. Some 27% are using their computers to create finished art, and a handful – 2% to 3% – are exploring the frontiers of graphics, desktop publishing and presentations and slides.
The end result is finding agencies becoming, literally, greater than the sum of their human parts. “The scales are being tipped,” says Beason. “With the competitive edge of having access to information, smaller agencies can compete with larger agencies. Agencies of all sizes can compete on the same footing.”
For Schell/Mullaney, the end product was like something out of a commercial. The client took a long look at the executions on the table – the finishing touches of which were downloaded from Brian Mullaney’s computer onto a laser printer only a half-hour before the meeting, just before the three musketeers piled into his Jeep and stormed across the East River to the client’s headquarters.
The client took another look at the work. Then he turned to the three ad agency executives, who were practically imploding from the suspense, and delivered his judgment.
“How did you do all this in four days?” he asked, incredulous. “How many people are in your agency?”
“You’re looking at them,” the agency executives replied. “But we’re thinking about hiring a secretary.”
“Don’t,” the client said. “Keep your overhead low.”
The agency – having gotten the account – went ahead and hired the secretary. But it also stocked up on computers.
It’s not alone. In the aftermath of the San Francisco earthquake last October, with the Bay Bridge shut down and disaster officials asking businesses to keep their employees from coming to work, creative groups in Foote, Cone & Belding’s San Francisco office kept in touch with each other, and kept working, at home on their Macintosh computers.
When Leo Burnett moved into new offices in Chicago last year, it invested close to $2 million installing fiber-optic cable to link 1,200 computers in the agency. At BBDO, New York, staff now produce virtually every animatic with their computers. Lintas:New York keeps the names of every contact that its executives have within prospective clients on its in-house computer system.
At Ogilvy & Mather, 90 cents out of every dollar that will be spent on capital investments this year will go to technology, according to Louise Vondran, senior vice president, chief information officer at the agency.
The agency has been methodically bringing its departments on line. Media, for example, is “100% integrated,” and O&M hopes to have a computer on every desktop in its creative department by the end of the fourth quarter, Vondran says.
“Our mission is to let anybody sit at a terminal and access anything they want,” she says.
Including a new account.
“I’d like to see the day,” Vondran says, “when a client will be able to call an agency and introduce a new opportunity – a new campaign or a product – and have our agency be able to respond to that in a truly integrated fashion. Not the account people doing what they have to do, then turning it over to the creative people to do what they have to do – responding to the client’s request sequentially.
“I look to the day when all our people can be pulled together and work on the opportunity at hand simultaneously. There’s just an energy from everybody being in the same place at the same time.
“It’s not just being faster, cheaper. It’s being faster, cheaper, better.”
Faster, cheaper and better already are words agency executives readily associate with computerization, ADWEEK’s survey shows. A stunning 94.9% of agency executives polled report that computers have increased employee productivity.
Almost as many – 90.7% – say computers have given their agencies faster turnaround. A solid 81.4% say computers have increased their business. Some 77.3% say computers have increased profitability, and 65.9% report that computers have reduced their overhead.
Even account retention – the perennial thorn of ad agencies – is seen as being improved: 61.9% of those surveyed say computers have led to keeping more accounts. Smaller agencies see particular value here: Nearly 70% of them say computers help retention.
Meanwhile, the problems computers pose are not seen as being so significant. The biggest is vendor support. A hefty 34% of respondents say they were disappointed by the level of vendor support they had gotten. But only 22.6% say systems failures led to substantial losses in productivity. And only 16.5% say their agencies have found computer systems hard to use.
The most significant measure of computers’ acceptance in the agency workplace, however, may be the feelings agency staff have about the printed word and the electronic word. Considering the centuries’ worth of head start the printed page has on its diode peer, electronic information has made surprising inroads. Close to 9 in 10 respondents say their employees use information in both printed and electronic form. Only a slight majority say their staff prefer the printed word.
And 98% expect their agencies’ use of electronic information to increase in the next two years.
At the same time, 53.6% also expect their agencies to use more printed information – a tip of the hat, perhaps, to the glut of information created by computer databases.
More, to be sure, seems to be the theme for the future of computers in agencies. More uses: Over three-quarters of agencies surveyed say they are generating their own internal databases. Well over half of the agencies are drawing on external psychographic and demographic databases.
But with more uses come more challenges. For many agencies, the most significant one will be figuring out how to connect computers, both internally and externally, to clients, print shops and other sources. “Integrating all of the applications is going to be key,” says Mead Data’s Beason. “Like other companies, agencies recognize that they’re going to have to have their computers talk to each other.”
“Integration is the next frontier,” says Mary Henesy, president of Professional Software Systems of New York. “It is just beginning.”
To be sure, only 49.5% of ad agencies say they have linked their computers to other computers in their office. A slim 23.7% have linked their computers to computers in sister offices of their agencies, and 20.6% have linked their computers to clients’ computers. One-third of the agencies do not have their computers linked to anything.
Part of the problem is the variety of hardware systems and software applications that exist within agencies. Beyond making minicomputer and mainframe computer systems coexist, agencies have a dizzying array of different kinds of personal computers. Agency executives listed 16 different brands of personal computers.
More than half of the ad agencies – 55.2%, to be exact – have Apple Macintoshes; 43.8% have IBM PC ATs or XTs; 16.7% have IBM PS/2 computers; 14.6% have Compaq computers; 14.6% have PC clones; and 7.3% have Wangs.
Even greater, it seems, is the proliferation of different kinds of software. According to agency executives, Lotus 1-2-3 remains the program of choice in spreadsheets. Donovan Data Systems appears to hold a solid leadership position in media-buying software for mid-range to mainframe computers, and claims that agencies use this software to handle three-quarters of their media spending.
Media Management Plus claims the leadership position for media-buying software for PC-based agency computer systems, with an installed base of about 1,000 agencies.
Manas, Adware, IMS and Telmar are among other brands that have gained adherents in media planning and buying.
On the creative side, popular programs range from Aldus’ PageMaker, Persuasion and FreeHand programs, and Adobe Illustrator – the programs that led the way in desktop publishing – to Quark XPress, the Cricket line, Letraset Ready-Set-Go, Claris MacDraw, Microsoft PowerPoint and Silicon Beach’s SuperPaint.
The result, for most agencies, is a tool box of programs that are drawn on for specific, individual tasks.
While logic would seem to dictate that the number of software programs in the market would shrink, as the market matures and titles shake out, many observers expect the number to increase. Having installed their “base” systems, agencies are ready to move on to more sophisticated applications that will give them “more data and better ways to look at it,” says Judy Hollis, vice president of marketing for agencies for Marketing Resources Plus, the publisher of Media Management Plus. MRP, for example, plans in the coming months to release media-planning and buying programs for business-to-business magazines and cable television.
“We think there are still huge chances for growth,” Hollis says.
The growing number of hardware and software choices is, in turn, leading a growing number of agencies to bring in specialists, with titles like chief information officer, to manage their computer systems.
Ogilvy’s Louise Vondran, for example, on coming to the agency two years ago from a similar post at Johnson & Johnson, says she found “a tremendous plate of different software programs” in use at the agency. Gradually the agency has begun trying to “knit” them together. Creatives now, for instance, can trade information electronically with their counterparts in account management. But doing so takes more keystroke steps than the agency would like.
The collapse of Atlantic Computers, the world’s third largest computer lessor, has left users painfully aware that it is they who will pay for Atlantic’s collapse.
Champagne flowed, party streamers were tossed out of windows and Porsches slipped quietly out of the company parking lot to their drivers’ lavish homes. This was the picture painted by British newspapers, as staff members at Atlantic Computers PLC in Staines, England, learned of the computer-leasing giant’s collapse.
Within hours after the public address system announced to the staff that they were being laid off, there were reports that people walked out the door with PCs, customer lists and paintings. One director was quoted as saying that anything of value not physically locked down disappeared.
Within days, many staffers at Atlantic had found new jobs, mostly in the computer-leasing industry. According to one executive, some of the employees have gone on to other companies that sell leases similar to Atlantic’s Flexlease.
The crash of the world’s third largest leasing company and its London-based parent, British & Commonwealth Holdings PLC, has left in its wake complaints by investors of lost millions and criticisms of Atlantic’s standards that allegedly allowed companies to virtually write their own account of profits and losses.
Despite early warnings from the British legal community and other computer lessors about the validity of Atlantic’s Flexlease contracts, users found the price advantages of the Flexlease irresistible. Customers that included the U.K. Atomic Energy Authority, Hanson Trust and Penguin Books found themselves pulled along in the company’s rapid growth. Between 1983 and 1985, Atlantic’s pretax profits soared from about £2 million to nearly £15 million (or about $3.4 million to $25.5 million).
Now users are all but forgotten in their misery amidst the corporate ruin. Most of them are saying nothing until they have worked out the scale of their potential liabilities, but privately they admit to anger at having been taken in by the company’s apparent success and its salesmanship.
Some users are more embarrassed than vengeful. They anticipate having to answer some difficult questions from boards of directors as to why, with all of the warnings in the marketplace, contracts with Atlantic were signed in the first place. And their unease will not be mitigated by allegations that some Atlantic staff members offered inducements to users to sign contracts. “I know that a number of DP managers are worried about their jobs,” says Michael Moore, chairman of the IBM Computer Users Association.
Besides the low initial rental charges on the Flexlease, users were attracted by its provision that allowed them to buy a more powerful machine about halfway through the term of six or seven years by swapping their contract for a new one. The remaining payments, in particular the higher sums due near the end of the term, were written into the new lease. As before, the new lease allowed for low initial payments. The highest payments were due in the final two years, but as long as customers continued to upgrade equipment, the final year’s payments never arrived. A problem with this type of pyramid selling was that it required a steady supply of new clients to defer Atlantic’s liabilities. Several users with large sites may have seen the trouble coming, as some of them were trying to negotiate their way out of their Atlantic leases at the time of the collapse.
Legacy of Anguish
The others are left with an inheritance of anguish, forced to prepare additional budgets requesting board approval for hundreds of thousands of pounds in extra lease charges. Users may also have to keep equipment they want to replace for longer than anticipated, and they may no longer receive any financial benefits when they finally return the equipment. They forecast that extra leasing liabilities ultimately will appear on their balance sheets and that the leases will be changed so that they are less flexible. Users also predict that they will have to meet additional tax liabilities. Compounding the humiliation for many users are the warnings that emerged as long ago as 1980 about the dangers of signing flexible leases of the type Atlantic marketed. In 1987, the Royal Bank of Scotland was one of several users that went on record declaring that Atlantic’s Flexleases were unsatisfactory because the cost of upgrading was unspecified and left to negotiation with Atlantic.
Nevertheless, the Flexlease, devised by founder and motor-racing enthusiast John Foulston, continued to attract hordes of users even after his death in a crash in September 1987. In 1988, Atlantic wrote about £400 million ($680 million) worth of Flexleases, each of them unique.
But as IBM’s leasing arm became increasingly aggressive, the growth in Atlantic’s business started to slow down. A growing number of users tried to cut their ties with Atlantic by exercising the lease’s walk option. This allowed customers to return their computer and terminate their agreement after four or five years, usually a year or two before the lease was due to expire. Under this arrangement, Atlantic undertook to settle the outstanding payments with the owner of the equipment, usually a third party such as a bank. This meant that Atlantic’s contingent liabilities grew as more customers exploited the walk terms. Profits began to dwindle, and on April 16 of this year, British & Commonwealth called in High Court administrators to take control of the company.
What, Me Worry?
At first, users were not unduly worried. They had thought Atlantic would be bought by a large financial institution, which would honor the company’s commitments to users.
It was only at a recent gathering of Atlantic users in London’s Cumberland Hotel, organized by the IBM Computer Users Association, that users had their worst fears confirmed.
First, Atlantic’s administrators surprised delegates by grimly telling them the company’s affairs were so complicated and the liabilities so enormous that a wholesale takeover of Atlantic’s business was highly unlikely. In effect, the administrators said, users who had signed Atlantic’s unique Flexlease agreements would not be able either to upgrade their computers or to walk away from the leases early, as the Flexlease terms had stated.
It was after this conference that many of the U.K.’s 1,000 Atlantic users began to scrutinize their 2,500 leases with a diligence that perhaps had been missing when they first signed the contracts.
To some, it was a revelation that the lease comprised two sets of documents.
The first set was an agreement between a bank and the customer. In this, the user would agree to make leasing payments over a period of typically six or seven years. The user also signed a separate agreement with Atlantic that promised the upgrade and walk options. What many users did not realize was that, in the event of Atlantic’s collapse, the agreement with the bank stood firm, while the contract that it had with Atlantic was worthless.
As things stand today, users are tied to the original lease with the bank for the full term of up to seven years, which means they cannot exercise the flex or walk options.
Car manufacturer Volkswagen AG believes it will have to make another year’s payments on an IBM 3081 that it wants to dispose of now. When it buys a bigger processor, it also may have to pay second-user software license fees, because its new machine will be a second mainframe, not a replacement one.
Vetco Gray, a supplier of underwater oil-drilling equipment in Aberdeen, Scotland, faces a similar problem and is concerned that it may have to pay for equipment it has already returned to Atlantic. It fears that it will be unable to upgrade its IBM System/38 and start a new lease on a bigger machine next year. Instead, it may have to continue paying the rental stream, maintenance and service on a redundant System/38 for another four years.
Another Aberdeen-based company, Baker Oil Tools U.K. Ltd., is particularly concerned that payments it made to Atlantic may not have been passed on to the third party to the lease, which owns the equipment and is often called the head lessor. Baker returned a System/38 to Atlantic more than a year ago and replaced it with an AS/400. “We have been told by the administrators that they cannot give us any answers yet. They are still unraveling Atlantic’s affairs,” says a Baker Oil spokesman.
Tony Collins is executive editor at Computer Weekly in Sutton, U.K.
SVG Lithography Systems Inc. has been awarded an optical development subcontract by a DARPA-funded program investigating microlithography using 193-nanometer wavelength light.
The four-year, $23 million Direct Excimer Processing Program, most of which is being conducted at MIT-Lincoln Laboratories in Bedford, Mass., hopes to spark development of a commercially available microlithography system using the “deep, deep ultraviolet” wavelength, said Arati Prabhakar, acting deputy director of the Defense Advanced Research Projects Agency’s Defense Science Office.
“The end goal is to have an optical quarter-micron system available to the semiconductor industry,” said Dr. Prabhakar. In addition, she added, the program is researching “process improvements to reduce the manufacturing step count and simplify processing — things like all-dry processing and some further-out ideas like resistless processing.”
SVGL, which comprises the former Perkin-Elmer lithography operations, will work with Lincoln Labs to produce an optical train that could be incorporated into the third generation of its Micrascan step-and-scan litho system, which is slated to reach market in 1994. Officials declined to reveal how large SVGL’s subcontract is, but called it “substantial.”
The operation started work on an earlier phase of the program prior to Silicon Valley Group’s buyout, noted Dave Shaver, leader of Lincoln Labs’ submicrometer technology group. “Lincoln had approached several U.S. companies — including GCA, Ultratech Stepper and Perkin-Elmer — in 1988 to ascertain whether any of them had any interest in 193nm work,” said Dr. Shaver. “Ultratech and P-E both did studies; for a variety of reasons, we chose Perkin-Elmer for the following phases.”
In the earlier phases, said Dr. Prabhakar, both firms submitted proposals explaining how they would approach the problem of producing a 193nm system. “Ultratech submitted a very interesting, very elegant design but they decided they couldn’t commit their resources to it,” she said.
Dr. Shaver said the design of the optics that will be produced was done at Perkin-Elmer, “by a number of their top designers there. It’s very elegant, but there are a lot of issues beyond the design itself. They will be looking at issues of fabrication and manufacturing, that will be wrestled with over the next few months to a year.” Funding for the program ends in 1992.
Unfortunately, the turmoil surrounding P-E’s divestment of the lithography operations slowed the program’s work. “They were in the study phase when they were put on the block,” said Dr. Shaver. “There was a significant delay because of the turmoil. We wanted to let the dust settle before we proceeded.”
In addition to the litho tool work, Lincoln Labs is also pursuing development in processing and resist technology. Other subcontractors include makers of photoresists and glass companies but Dr. Shaver declined to identify any of them.
The government’s interest in 193nm lithography stems from a need for high-performance chips for military applications. “We want to try and develop a cost-effective quick turnaround process for 0.25 micron Defense Department circuits,” said Dr. Shaver. “There is concern about the feasibility of X-ray lithography for the modest-volume, high-resolution approach.”
By shifting lithography techniques toward shorter wavelengths, chip makers can achieve finer resolution with greater depth of focus. Systems based on the traditional standard G-line wavelength are beginning to be supplanted by machines using shorter I-line light. Deep-ultraviolet systems, such as the first-generation Micrascan and the excimer laser-based systems produced by GCA Corp., use 248nm light; X-ray systems promise still finer features, but many unsolved questions regarding photomasks and economics have slowed their acceptance.
Therefore, 193nm techniques could prove to be a valuable mid-term step, said Dr. Shaver. “All the data isn’t in yet, but we’re more optimistic than when we started,” he said. “It’s high risk, high payoff. We want to avoid being in a holy war with 248nm and X-ray, but want to make sure it gets every opportunity. The only course is to pursue all three.”
One possible advantage to 193nm is that “essentially, the photons are energetic enough that you can carry out photochemistry that’s impossible at longer wavelengths,” said Dr. Shaver. “We might be able to do resistless processing, direct etch or direct deposit. The richness of chemical processing increases dramatically at shorter wavelengths.”
In addition, he said, 193nm gives a small additional amount of field depth over 248nm, and compared with X-ray, “the masks are incredibly easier to fabricate, using thick glass blanks.” X-ray masks require very thin membranes.
Some of the work done under the Direct Excimer Process Program is already being transferred to chip makers such as Texas Instruments, IBM and Intel, noted Dr. Prabhakar. “We’re working very closely with the chip manufacturers as well as the tool vendors; we’re seeing some excellent transfers,” she said.
But Dr. Shaver points out that “the proof comes in building a tool and doing it in a short enough time frame to make it a commercial success. If we’re still prototyping it in 1997, you won’t see many in fabs.”
IBM launched a new marketing assault on its vast but aging System/36 base last week, bringing out three CPUs at the low end of its AS/400 line, including its fourth so-called Entry model, which runs S/36 software but is the first that can be field-upgraded to full AS/400 capability.
In addition to the introduction of the new computers, as well as new software and peripherals, IBM issued a statement of direction, noting it plans to expand AS/400 performance with a field-upgradable model offering twice the current high-end B70 performance in mid-1991.
Industry sources said the moves signal IBM’s intent to boost the price/performance of its proprietary systems to keep pace with similar developments in the so-called open systems market based on industry-standard architectures and operating systems. Fueling IBM’s intent at the low end, they noted, is the sizeable base of the company’s earlier equipment, in this instance the S/36, which the firm has previously said amounts to some 200,000 machines.
The latest AS/Entry model comes about 11 months after IBM brought out the initial Entry models (EN, October 2, 1989), which extended the S/36 architecture but incorporated some of the newer technologies found in the AS/400, which when originally introduced was intended to wean users from both the S/36 and S/38. The three earlier Entry models could not, however, be field-upgraded to an AS/400, as can the AS/Entry Y10 brought out last week. The Y10, listing for $11,000, was accompanied by a new version of the S/36 operating system, SSP Release 6, priced at $1,195. A field upgrade, consisting of a board swap, would amount to $7,500. The price of moving to the AS/400 OS involved in the upgrade would be $6,350.
The $11,000 price tag for the Y10 would yield a CPU with one megabyte of memory — with another megabyte optional — and 160 megabytes of disk storage, with the optional maximum set at 640MB. While IBM said it would continue to offer the AS/Entry S10 introduced last September, the Y10 appeared to offer better price/performance, as well as the upgrade capability. Both list for the same price for a one megabyte machine but the S10 offers less disk capacity with 105MB as standard.
The new Y10 can be upgraded only to the larger of the two new AS/400 computers fielded by IBM last week: the C6, which with eight megabytes of memory and 640MB of disk storage as standard lists for $17,500. The smaller C4, with the same standard memory and disk capacity, is priced at $14,500. The C4 also can use a less expensive version of the OS/400 operating system, priced at $3,750, compared with the $6,350 list price of the C6’s OS, which, as noted, supports the upgrade from the Y10 Entry model.
The C4 can be expanded to 12MB of memory and 960MB of disk capacity, while the C6 can grow to 16MB of memory and 1,280MB of disk storage.
In terms of performance, IBM rated the C4 at 1.1 times the power of the B10, the smallest of the original AS/400 family; the C6 offers 1.3 times the B10’s performance. Both, IBM said, are less expensive, with the C4, for example, coming in some 30 percent under the older B10’s price. As another yardstick, the C4 is said to support 14 local workstations and the C6 up to 54.
The C4, C6 and Y10 all use a new 9402 system unit, a floor-standing platform.
The AS/400 OS also came in a new release last week, another of what IBM said were more than 200 hardware and software enhancements across the AS/400 family.
In peripherals, IBM brought out disk and tape storage products and a memory expansion option for AS/400 systems.
Disk storage was increased up to 54.8 gigabytes with the new IBM 9336 disk unit for the B30 and up. IBM said customers can use the increased capacity for disk storage mirroring. The 9336, which uses compact 5.25-inch disk drives, is up to 38 percent faster and offers better reliability than the earlier 9332 and 9335 disk units, according to IBM. There are two basic 9336 models: the Model 10 with 471MB at $26,000 and the Model 20 with 857MB at $39,000. Additional capacity comes with add-on features priced from $8,500 to $30,000.
IBM also expanded the disk capacity of the earlier AS/400 C25 by 50 percent to 3.84GB, enabling mirroring. In addition, main memory for the Model B70 can now be doubled to 192 megabytes. The firm said the B70 memory increase is made possible by a new 32-megabyte memory card that uses IBM’s new four-megabit, 65-nanosecond memory chip manufactured in Burlington, Vt., representing its first use in an IBM computer.
The firm brought out the optional 9348 magnetic tape unit Model 2, a tabletop unit with a 1600/6250 bpi (bits per inch), 1/2-inch reel tape drive that allows the interchange of information between an AS/400 and another IBM or non-IBM system and also can be used to back up information.
September 28 availability was set for the C4 and C6, OS/400 Release 3, the 9348 tape subsystem and the 9336 disk unit as part of new orders for the B50, B60 and B70. The disk unit becomes available October 26 for upgrade orders on those three systems. For new or upgrade orders for the B30, B35, B45, P35 and P45, the disk unit will be available next March 15. The Y10 AS/Entry is slated for this October.
Other new features fielded last week for the AS/400 line were an integrated Ethernet LAN attachment, a 16/4 megabit Token-Ring LAN adapter and TCP/IP TELNET support intended to improve the ability of the line to operate in computer networks with other manufacturers’ equipment and to function as a client/server.
New languages included FORTRAN/400, RM/COBOL-85 and Procedures Language 400/REXX.
The ASCII facility, which is expected to be completed next year, will house efforts to research multimedia technology and establish audiovisual facilities; a ground station using a communications satellite transponder is also being planned. ASCII hopes that its media technology laboratory will be a resource not only for Japan but also for multimedia industries in America and Europe.
One of the most important devices for multimedia may be the CD-ROM. Several computers now include a CD-ROM drive in their standard configuration, among them the FM-Towns from Fujitsu Ltd. and the PC8801MC and PC-Engine from NEC Corp. The PC-Engine is a game machine whose core concept is expandability; it was designed for easy connection of various peripherals. Its CD-ROM drive achieves the low cost required in a game machine by sacrificing error correction and other features included in previous CD-ROM drives. As a result, the PC-Engine has sold 2.1 million units since hitting the shelves this past July. Not only games (the leading edge of the multimedia wave) but also karaoke and other types of entertainment applications are being marketed. The PC-Engine can be connected to the NEC 8801MC, which also incorporates a CD-ROM drive; the compatibility, however, reaches only to the hardware level, as there is no software compatibility.
NEC, with its PC9801 Series, controls about half of the personal computer market in Japan. For this reason, all eyes are on NEC, waiting to see in which direction it will throw its considerable weight within the multimedia market. NEC has recently introduced a NESA (New Extended Standard Architecture) high-end version of its PC9801 Series PCs. NESA is a 32-bit bus architecture, widely touted as the standard for multimedia.
NEC is also at work on developing CD-ROMs that can be used by both the PC9801 Series and the PC8801 Series computers. The Fujitsu FM-Towns, a 32-bit personal computer, was announced in April 1989. The FM-Towns was the first Japanese PC to include a CD-ROM drive, and it was expected to develop into a hardware platform for multimedia. But sales figures so far have been disappointing. Fujitsu sold only 60,000 FM-Towns units last year, less than one-tenth the volume for the PC9801 Series. Sony Corp. also has released a personal computer incorporating a CD-ROM in its standard configuration, the QuarterL (C Model).
The QuarterL is being marketed to businesses, with a view to connect to POS systems and to develop sales-monitoring and order-monitoring systems. Besides the C Model, Sony is expected to release an X Model to support the CD-ROM XA standard. Fujitsu and Sony are developing a common library of CD-ROM XA applications. At this point, a single standard has been established for the data portion of CD-ROM XA, but the program portion, which actually controls the computer, has not been standardized. A common procedure for software development will have to be established in order to obviate the need to modify the software for each type of computer on which the multimedia is to be used. The library is expected to be made available at no charge to other hardware manufacturers, software companies, and publishers later this year. Manufacturers of facsimile machines are also getting into the act. Canon Ltd.’s G3/G4 facsimile can operate on a local-area network. This fax incorporates an Ethernet interface and LAN Manager, allowing it to send and receive faxes and documents from workstations and to input documents from image scanners.
Canon is focusing on developing the multimedia market by way of office automation products. Last year, Apple Computer Inc., which hopes to expand its share of the Japanese PC market, began selling its Apple CD SC, which can be used with the Macintosh and Apple II GS machines. Apple’s share of the Japanese PC market by the end of 1989 was said to be approximately 3%, but figures for the first quarter of 1990 are up 80% over the same period last year. Apple is determined to strengthen its Japanese position even further, with a goal of reaching 10% market share this year.
As Apple increases its share, the Apple Media Control Architecture, which integrates multimedia, will become a major force in multimedia in Japan. Japan’s computer manufacturers possess world-class technology and are capable of exerting enormous influence on the world’s computer industry. With regard to multimedia, however, it is clear that Japan is far behind the level achieved in the U.S. In particular, there is an unsatisfied demand for rich multimedia application software. This need cannot be met unless software developers are joined by producers and musicians who can create professional-quality multimedia. As hardware penetration increases, analysts expect applications, currently sold by hardware manufacturers, to become available through bookstores, computer stores and toy stores.
If you’re smart, you know the solution is 100 percent pure Linux. The entire internal network, from the routers down to the Web servers, runs at 100Mbit/sec (Fast Ethernet); from the Web servers to the Network Appliance file servers, the wires run a cool 1,000Mbit/sec (Gigabit). The first time the gamblers hit a Linux machine is when their TCP/IP packets arrive at the firewall.
This computer is a Compaq Alpha workstation with 256MB of RAM, running SuSE’s 7.0 distribution. It is the only non-Intel machine in the solution, and one might wonder why. The reason is security: most buffer-overflow exploits on Linux are precompiled for the Intel x86 architecture, so running a platform unable to execute them statistically diminishes attacks by script kiddies. Another reason is that this is the only non-redundant point in the whole network, and Compaq’s Alpha machines are simply more reliable than x86-based PCs.
The firewalling itself is IPCHAINS-based. Basically, the only thing allowed into the network is an HTTP packet addressed to port 80; everything else is discarded. In the outgoing direction, only packets coming back from the Web servers are allowed. The rule set for this is very easy to configure, and the security is no less than that of the expensive commercial firewall products.
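The rule set described above can be sketched with a few ipchains commands. This is only an illustrative sketch, not the site's actual configuration: the Web-server subnet address and interface layout are assumptions.

```shell
#!/bin/sh
# Illustrative ipchains rule set for the policy described above.
# WEB_NET (the Web servers' subnet) is an assumed address.
WEB_NET=192.168.1.0/24

# Default policy: discard everything in both directions.
ipchains -P input DENY
ipchains -P output DENY

# Incoming: only TCP packets addressed to port 80 on the Web servers.
ipchains -A input -p tcp -d $WEB_NET 80 -j ACCEPT

# Outgoing: only replies coming back from the Web servers' port 80.
ipchains -A output -p tcp -s $WEB_NET 80 -j ACCEPT
```

With a default DENY policy, anything not explicitly accepted simply disappears, which is exactly the behavior described.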
Outside the secure area, the site serves descriptive pages for users not able to enter the restricted Web area.
This Web server is actually a cluster of two machines running Red Hat’s Piranha high-availability clustering software.
Each node runs a watchdog daemon that checks every few seconds to make sure the other Web server is alive. If it isn’t, the failed machine’s virtual IP address is switched to the local machine and packets arrive there henceforth. The two machines are Netfinity 4000R rack-mounted ultra-thin PCs, each with 1GB of RAM and Apache 1.3.14. The Linux running on these machines is Moshe’s special edition, consisting of kernel 2.4.0-pretest7, the logical volume manager and reiserfs (a journaling file system for Linux), with all unnecessary daemons and programs removed. I opted for a pre-release kernel for a good reason.
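The watchdog-and-takeover idea can be sketched as a small shell loop. Piranha's real implementation is more involved; the addresses, interface name and interval below are hypothetical.

```shell
#!/bin/sh
# Illustrative sketch of the watchdog idea, not Piranha's actual code.
# PEER and VIRTUAL_IP are hypothetical addresses.
PEER=10.0.0.2          # the other Web server's real address
VIRTUAL_IP=10.0.0.100  # the virtual service address it normally holds

while true; do
    # If the peer stops answering pings, take over its virtual IP
    # by aliasing it onto the local interface.
    if ! ping -c 1 -w 2 "$PEER" > /dev/null 2>&1; then
        ifconfig eth0:1 "$VIRTUAL_IP" up
    fi
    sleep 5
done
```

In the real cluster the takeover also announces the new address on the LAN (a gratuitous ARP) so switches and clients update their caches; packets then arrive at the surviving node, as described above.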
The 2.4.0 Linux kernels have a built-in kernel-based Web server. Tests have shown that over 80 percent of the time needed to serve a traditional static Web page is spent switching between kernel space and Apache’s user space. The kHTTPd kernel-based Web server executes purely in the kernel and does away with all the switching. For CGIs, the Apache server with mod_perl for Perl processing is still used.
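On the 2.4.0-test kernels, kHTTPd is controlled through files under /proc/sys/net/khttpd. The sketch below is hedged: the document root is an assumption, and the exact set of /proc entries varied between pre-release kernels.

```shell
#!/bin/sh
# Hedged sketch of enabling kHTTPd on a 2.4.0-pretest kernel.
# /var/www is an assumed document root.
insmod khttpd                                 # if built as a module

echo 80 > /proc/sys/net/khttpd/serverport     # kHTTPd answers here
echo 8080 > /proc/sys/net/khttpd/clientport   # user-space Apache port
echo /var/www > /proc/sys/net/khttpd/documentroot
echo 1 > /proc/sys/net/khttpd/start           # start serving
```

Static requests are answered entirely in the kernel; anything kHTTPd cannot handle, CGIs for instance, is passed to the user-space server listening on the client port.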
The result is an extremely skinny OS and Web server environment, leaving all the RAM for buffering Web pages. Said pages are actually never read from disk: I attach a 100MB RAM disk at boot and copy all the Web pages and CGIs to it. Apache then reads its data from this RAM disk, further reducing time-intensive I/O. The Apache logs are written to a second, 50MB RAM disk, and a regularly running script copies the log to disk asynchronously and flushes the RAM disk’s contents. The result is a Web server cluster able to serve over 2,500 pages per second.
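The RAM-disk arrangement can be sketched as boot-time setup plus a periodic log flush. Device names, sizes and paths here are assumptions, not the author's actual layout; a 2.4 kernel also needs a ramdisk_size boot parameter (e.g. ramdisk_size=102400 for a 100MB disk), since the default RAM disks are far smaller.

```shell
#!/bin/sh
# Hedged sketch of the RAM-disk setup described above.
# Devices and paths are assumed, not the author's actual layout.

# Pages: 100MB RAM disk, populated once at boot.
mke2fs -q /dev/ram0
mount /dev/ram0 /var/www
cp -a /usr/local/htdocs/. /var/www/   # Apache's DocumentRoot points here

# Logs: 50MB RAM disk, flushed by a periodic job.
mke2fs -q /dev/ram1
mount /dev/ram1 /var/log/httpd

# Periodic flush (run from cron): archive the log, then truncate it.
cat /var/log/httpd/access_log >> /var/log/archive/access_log
> /var/log/httpd/access_log
```

Truncating rather than deleting the log lets Apache keep writing to the same open file descriptor between flushes.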
Once users decide to actually enter the casino and register as authorized users, all further Web pages are served in SSL (Secure Sockets Layer) mode.
Nowadays, losing data need not be the end of the world. We all understand that data can be lost in one way or another, and a number of companies are very efficient at data recovery. Such companies have developed comprehensive strategies: they can recover data across a wide array of devices and operating systems, from almost any kind of laptop, server, network, desktop or storage device imaginable. One of the most reliable is Kroll Ontrack, which counts most Fortune 100 companies in the U.S. among its clients, a sign that it is extremely reliable.
Professional data recovery usually takes place in clean rooms.
These companies have a reputation for proper industrial security; Kroll Ontrack, for example, has been recognized by the U.S. Defense Department for the past 15 years. On the strength of that reputation, I recommend this company to solve all your data recovery problems.
Hard drives can lose data through crashes, physical breakage, drops, fire, water, viruses or a crushed personal computer. Always look for a company that can recover data from hard disks of all manufacturers.
Since data loss occurs unexpectedly, the best precaution is a stable backup against completely losing data when a system crashes or some other misfortune strikes. Important data should be kept in hard copy as well as soft copy to minimize the risk of losing it entirely if the hard drive fails abruptly. The risk can be reduced further by storing the data on as many mediums as possible: tapes, compact discs, diskettes, hard disks and any other storage media, for recovery whenever the need arises. These precautions will go a long way toward reducing the costs and inconveniences associated with the data recovery process.
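The precaution described above, one copy per storage medium with a readability check, can be sketched in a few lines of shell. The directory names are illustrative stand-ins for real mount points (a second disk, a tape staging area, a CD-R spool).

```shell
#!/bin/sh
# Sketch of a multi-medium backup: one dated, compressed archive per
# destination, verified before being trusted. Paths are illustrative.
set -e
WORK=backup-demo                     # stands in for the real file system
SRC="$WORK/important-data"
mkdir -p "$SRC"
echo "payroll records" > "$SRC/ledger.txt"

STAMP=$(date +%Y%m%d)
for DEST in "$WORK/backup-disk" "$WORK/backup-tape"; do
    mkdir -p "$DEST"
    tar -czf "$DEST/backup-$STAMP.tar.gz" -C "$WORK" important-data
    # Verify the copy is readable before trusting it.
    tar -tzf "$DEST/backup-$STAMP.tar.gz" > /dev/null && echo "$DEST: OK"
done
```

In real use the loop's destinations would be the mount points of the different media, and the script would run from cron.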
We should also develop the habit of servicing our machines regularly as a maintenance practice, to minimize the chances of a breakdown. That maintenance includes installing anti-malware programs on computers as well as regular servicing of computer hardware, to avoid complicating the situation when something does go wrong.
How do you bring UNIX client-server applications to Novell NetWare networks? In the past, users have had two alternatives: run Portable NetWare (now called NetWare for UNIX) on a UNIX server, or have separate servers on the net running NetWare and UNIX. The first way is slow; the second, expensive. Now, NetFrame Systems offers a new approach: a multi-processor server that runs NetWare on one processor and UNIX on the others. NetFrame went public in June, the first “superserver” company to do so. I recently visited the firm in Milpitas, CA, to observe its innovative architecture. What I saw could well become a mainstream solution for UNIX on Novell networks.
NetFrame was founded by Carl Amdahl, son of Amdahl Corp. founder Gene Amdahl. The younger Amdahl, quite literally, learned mainframe architecture design at his father’s knee. In the late 1970s and early 1980s, Carl founded two suppliers of large-scale systems: Magnuson and Trilogy. (Magnuson enjoyed a successful IPO; Trilogy was a huge debacle.) He founded NetFrame in 1987. Inspired by the IBM System/370, NetFrame reduced the I/O channel and other elements of mainframe architecture to VLSI and adapted them to an Intel processor. The resulting file server has very high I/O throughput while running standard shrink-wrapped NetWare or OS/2 LAN Manager.
In addition to this main “file server” processor, a NetFrame can have up to four application processors, also Intel. The file server and application processors are like independent file and application servers on a network, with a key difference: they communicate not through the network, but through shared memory. Thus, application requests to the file server can be made much faster than they would between conventional servers. Adding UNIX servers “on the net” becomes a simple matter of adding processor boards.
Univel UnixWare, SunSoft Solaris 2.0, and NetWare with NetWare Loadable Modules (NLMs) can be run on the application processors. As many as four copies of UNIX and/or NetWare may be run concurrently.
The NetFrame approach will be attractive to end users who want to bring UNIX applications to NetWare environments. It will also allow UNIX ISVs to address the NetWare market without having to convert UNIX applications into NetWare’s proprietary NLM format. NetFrame servers are available today with NetWare or OS/2 LAN Manager. The two flavors of UNIX will be available next year.
Anyone who regularly reads this column knows I have spent much of the past year calling on the UNIX community to pay serious attention to desktop UNIX. Not only have I exhorted vendors to develop small, friendly, shrink-wrapped versions of UNIX, but I have discussed at length the market effects on companies such as Santa Cruz Operation (SCO) if UNIX does or does not meet the challenge.
The good news is that UNIX vendors have perceived the same need I did and have understood that desktop UNIX must look and feel a whole lot different than its predecessors. USL and Univel have made surprisingly giant strides in this direction. SCO, in its own way, has continued to improve its high-end product, ODT. What Sun is doing, other than standardizing on SVR4, is open to conjecture, but Solaris’s portability to Intel systems shows an appreciation of this platform.
For 1992, UNIX’s understanding of the strategic importance of the desktop rates very good. The next question is: Can UNIX market its wares?
The marketing arena is critical. Here, the main opponent is Microsoft and its vast marketing resources. For example, the recent UNIX Expo in New York had more vendors and more attendees than ever. The mood was by and large upbeat. By all accounts, one would say that despite the recession, the UNIX community is thriving. Yet there was a malaise. Everyone was alarmed about Microsoft’s Windows NT. Its approaching footsteps could be heard.
In this issue, John Chisholm (see “Currents”) offers a series of reasons why NT is better technology than UNIX; he then opines that the competition with NT will benefit UNIX. I could not possibly disagree more. I do not believe NT is better (or even close), and I am most unconvinced that head-to-head competition will benefit UNIX.
Let’s examine technology. After months of saying the opposite, Microsoft has backed away from its claims that NT is multiuser. NT is not; it is multithreaded, however. (This means that NT’s threat to SCO-style sites is greatly lessened.) What networking will be built into NT, other than Microsoft’s less-than-successful LAN Manager, remains to be seen and tested. It certainly won’t have the blessing of archrival Novell nor the field testing of UNIX nets. Finally, Microsoft’s admission that it will not be fully compatible with existing MS-DOS and Windows applications means that even a smooth binary migration is not possible.
Add to this that NT will be validated by only six months of beta test (as opposed to years of testing for UNIX) and you start to see a product that looks fairly meek technologically. Ain’t no one in Redmond dragging users into the new age of operating systems.
As a result, it is difficult to believe that anyone would abandon UNIX systems to migrate to a manifestly less robust NT. Once the investment has been made in UNIX technology and the concept of open systems, abandoning it all for a closed, less technologically advanced solution makes little sense. NT will not change UNIX’s existing base.
What NT will do is divert potential customers from UNIX. Since UNIX, like all industries, needs new customers to survive, the threat of NT is that it will slow the growth of UNIX, just when UNIX really had begun to compete in the business arena. How much of a threat this is will be the topic of my column next month.
As NT boldly goes where only Unix went before, hopes are high for a distributed computing environment that will work happily with existing PC hardware and software. But can NT really supplant Unix, with its 25 years of proven network reliability?
Until now, serious networked applications users have turned to Unix to find tools capable of doing the job. Unix may still not have found favour on the desktop, but it was designed to be both multitasking and multi-user — making it great for the network, despite problems getting Unix and PCs to co-operate.
This paradigm is now under threat. The growth of Microsoft Windows, along with the company’s promise to deliver the 32-bit NT operating system, has raised users’ hopes that a genuine alternative to Unix is close at hand — with backward PC software compatibility.
Yet Microsoft will be at least four months late with NT. The product is a lynchpin of the company’s future success and its push into higher computing. Even taking into account the announced delay, NT has been under development for some time. Bill Gates, Microsoft’s CEO, recently revealed that the NT project actually started before IBM and Microsoft began joint work on OS/2. As one analyst said in a recent issue of PC User, Unix has been around for a quarter century and is only now coming up to scratch. Microsoft will have an almost impossible task emulating that progress with NT in a much shorter time.
So, although many corporates are basing their future plans on Windows and NT, it may take years before they have a robust enough environment to ignore Unix for good. This raises some questions about the immediate server software buying plans of large corporates. With so many pledging support for NT, often because the software comes from Microsoft and fits so closely with Windows, it will be interesting to see how they cope with waiting until NT is ready to do the job well.
Microsoft has always intended NT — written in the C language — to be the version of Unix Gates actually wanted, instead of the versions offered by older vendors. And with companies such as Sun Microsystems working hard to allow Windows applications to run on Unix without a trace of Microsoft systems software on the machine, it seems that Gates’ giant could be treading on difficult ground.
The struggle NT will face getting onto the corporate server was illustrated recently by a Unix International report, which showed that Unix and NT will be strong on the server and desktop, respectively. While there will be some crossover, this independent survey showed that Unix will continue to meet the demands of users who want a clean and reliable distributed computing system.
One significant applications area where Microsoft has to show convincing support is relational databases. Many servers on networks are set up merely to house large data repositories, liberating hard disk space on the client desktops. Oracle, Sybase, Ingres, Informix and other high-end database vendors have developed clean, fast products which principally run on Unix. Microsoft’s Open Database Connectivity (ODBC) interface, which will slowly be built into its products, must have the support of these vendors. Microsoft already has a significant alliance with Oracle over ODBC and is wooing the others. The ODBC connection will allow information to be shared between databases and other programs in the Windows environment.
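The appeal of a call-level interface such as ODBC is that application code stays the same while only the connection string names the back end. As a rough, hypothetical sketch, the snippet below uses Python's DB-API with the built-in sqlite3 module standing in for an ODBC driver, since the connect/execute/fetch call shape is analogous; the table and values are invented for illustration.

```python
import sqlite3

# Under ODBC, only the connection string would name the back end (Oracle,
# Sybase, Ingres, ...); the query calls below would stay the same.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE servers (name TEXT, os TEXT)")
cur.executemany("INSERT INTO servers VALUES (?, ?)",
                [("prosignia", "SCO Unix"), ("netframe", "NetWare")])
cur.execute("SELECT name FROM servers WHERE os = ?", ("SCO Unix",))
rows = cur.fetchall()
conn.close()
```

Swapping the database vendor then becomes a change to the connection setup rather than to every query in the application, which is precisely the portability Microsoft is selling ISVs on.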
Yet Oracle has also demonstrated its database breaking new ground in transaction processing, working with Compaq hardware and Santa Cruz Operation (SCO) Unix. In fact, Compaq and SCO have signed a European OEM agreement to bundle their products. Paul Oliver, SCO’s strategic accounts manager, predicts that 30 to 50 per cent of Compaq’s new ProSignia XL servers will ship with SCO Unix. SCO also has full XPG4 branding from the X/Open organization, one of the highest open systems marks. Microsoft remains scornful of X/Open and similar bodies, but may find this attitude disadvantageous when it comes to selling into the upper computing echelons.
It has always been clear that Microsoft’s plan to tackle both the front and back-ends of the operating system world is ambitious. In PC User recently, Microsoft’s vice-president for systems strategy, Jonathan Lazarus, argued that product shipment volumes speak for themselves. This is all very well, but it doesn’t tell the whole story.
Unix might never be a major force on the desktop, but it won’t lose its customers in the server sector easily either. So, the minority of Unix users on Intel-based machines pinpointed by Microsoft are really a significant population of MIS decision makers. They are more conservative, have more to lose, and may not be so taken with the revolutionary spirit that helped Microsoft conquer the PC market.