…What was massive RAM a decade ago is now too small. Letting the operating system and other software components drift out of date also causes unexpected behaviours that appear after an unrelated update.
Dave
Is this because software developers:-
- insist on adding functionality very few people want?
- have lost the art of writing efficient code?
- are getting backhanders from computer manufacturers to make us throw away perfectly good machines?
I think the word that sums it up is Bloatware.
Yes, no, and maybe!
Much depends on user need, and I regret to say that those of us who would like hardware and software to stop changing are a minority. The market is driven by new purchasers, for many of whom previous-generation technology isn’t good enough. They’re into playing photorealistic games, mining Bitcoin, AI, CAD, video editing, SDR, CGI and a bunch of other applications for which high performance is essential. Although yesterday’s hardware is plenty ‘good enough’ for browsing, email, text processing and much other classic computer work, it performs sluggishly or not at all when anything demanding is loaded.
One thing users dislike intensely is a sluggish response, which is why 15 years ago it was best to avoid cheaper laptops, even for basic work, because they were so slow. Coughing up for a faster processor and more RAM meant basic applications flew, giving a much better user experience.
In the good old days, memory was hideously expensive. As a result, all computers were bottlenecked by having to manage it carefully, with much time wasted swapping data and running processes to and from hard drives, or even mag tape. Memory became relatively much cheaper as time passed, so for most of my career the obvious answer to any performance problem was ‘add more memory’. The improvement often made it possible to load the computer with more work, requiring even more memory…
When I first programmed a mainframe, a monster in a special building, it had only 192k words of 24-bit memory (roughly equivalent to 768 kilobytes). For each read/write I had to manually allocate a memory buffer big enough to hold at least one record, ideally several, but only if the memory was available. Thus, processing would involve reading only one record at a time from a tape, processing it, and then writing the one updated record to another tape. It was much faster if enough memory was available to buffer many records, because a single read would take many records off the tape in one go, so the tape machine did far fewer start/stop operations. Any program that needed more than 32k words of memory was analysed and then carefully programmed to balance memory against performance. The terminology may have been just for fun, but I created many a FART, that is a ‘File Access Requirements Table’. These showed how many read/writes each device would do, how often (identifying steady vs burst activity), and the volume of data moved. The FART would then be used to define buffer sizes, noting that a program that had to finish over a weekend had better not still be chugging away on Monday!
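As a rough illustration of the arithmetic a FART captured, here is a minimal sketch in Python; the record size, record count and candidate buffer sizes are invented for the example, not taken from any real job.

# Hypothetical job: 500,000 records of 120 bytes each arriving on one input tape.
RECORD_SIZE = 120            # bytes per record
RECORD_COUNT = 500_000       # records on the tape

for buffer_size in (2_048, 8_192, 32_768):                   # candidate buffer sizes in bytes
    records_per_read = buffer_size // RECORD_SIZE            # whole records that fit in the buffer
    physical_reads = -(-RECORD_COUNT // records_per_read)    # ceiling division: tape reads needed
    data_moved_mb = RECORD_SIZE * RECORD_COUNT / 1_000_000   # total volume, the same whatever the buffer
    print(f"{buffer_size:>6}-byte buffer: {records_per_read:>3} records/read, "
          f"{physical_reads:>6} tape reads, {data_moved_mb:.0f} MB moved")

The volume of data is fixed, but the bigger the buffer, the fewer the physical tape reads and the less time the drive spends stopping and starting, which is exactly the trade-off the table was there to expose.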
Having a lot of spare memory allows the operating system to improve performance on the fly, almost making FARTs and the like unnecessary. Rather than requiring the programmer and operators to manage memory, an operating system with memory to spare will generously allocate memory to fulfil the needs of all processes, and then use what’s left to buffer entire files. Back in the day a 9-track computer tape held about 100MB in total and had to be read in blocks of about 2kB. Now, on receiving a request to read a 2kB record from a 100MB file, my 32GB Ubuntu machine would probably read the whole file into memory on the assumption that more reads will follow. On a 4GB machine there’s almost no opportunity for the operating system to do this, and 8GB isn’t much better. 16GB is a rule-of-thumb minimum, and these days I go for 32GB.
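To see this caching in action, here is a minimal sketch, again in Python, assuming a Linux box with plenty of free RAM and a hypothetical file called bigfile.dat of around 100MB; the second pass is served largely from the operating system’s page cache, so it finishes far faster than the first.

import time

def timed_read(path, block_size=2048):
    # Read the file in 2kB chunks, mimicking old record-sized reads,
    # and report how long the whole pass took.
    start = time.time()
    with open(path, 'rb') as f:
        while f.read(block_size):
            pass
    return time.time() - start

# The first pass hits the disk, and the OS quietly caches (and reads ahead)
# far more than the 2kB we asked for; the second pass is served from RAM.
print('first pass :', timed_read('bigfile.dat'), 'seconds')
print('second pass:', timed_read('bigfile.dat'), 'seconds')

On a machine that is short of memory the cached copy gets evicted and the two timings converge, which is the difference you feel between a 4GB and a 32GB machine.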
Whether a product needs new features or not depends on what it does. It’s been a long time since I needed anything new in a word processor, but there are plenty of opportunities in 3D-CAD.
Whilst there are plenty of good reasons for needing faster kit with much more memory, this doesn’t mean Duncan is wrong about Bloatware. Many a good product died from feature bloat after sellers found themselves desperately trying to keep up with the competition. Faced with a competitor with a better user interface and exciting new features, the older product could only be ‘improved’ by tacking new features on, when what was actually needed was a complete re-write. Thus loyal customers were presented with changes they didn’t want, an ever more clunky interface, and maybe a shower of bugs and security problems.
The need to write efficient code has been much reduced by compiler technology. In my youth compilers generated simple slabs of boilerplate code generalised to work reliably; an assembly programmer could easily improve on early compiled code. This changed rapidly when compiler writers tackled efficiency. Compilers are now stuffed full of optimisations, many of which are too labour-intensive for a human to do. This example might do two passes through the entire program, which could be a million lines of source code, depending on whether or not another optimisation has been done: “Perform a forward propagation pass on RTL. The pass tries to combine two instructions and checks if the result can be simplified. If loop unrolling is active, two passes are performed and the second is scheduled after loop unrolling.” Nowadays, programmers are discouraged from wasting their time on micro-efficiencies. The most economic answer is to provide plenty of memory and apply the right algorithm. There are plenty of exceptions, but micro-efficiencies are usually only applied to bottlenecks that emerge at run-time. A tool like cachegrind identifies which instructions are choking, and the programmer looks at the source to see what might be done. As efficiency matters most in system and microcontroller code, many application programmers never get involved in performance work at all.
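To illustrate the ‘right algorithm’ point (the account numbers and lookup counts below are made up for the sketch), the linear search can be micro-tuned forever and will still lose to simply switching to a structure built for membership tests.

import random
import time

# A million made-up account numbers, plus a thousand lookups to perform.
accounts_list = random.sample(range(10_000_000), 1_000_000)
accounts_set = set(accounts_list)                       # same data, different structure
lookups = random.sample(range(10_000_000), 1_000)

start = time.time()
hits = sum(1 for x in lookups if x in accounts_list)    # linear scan for every lookup
print('list scan :', round(time.time() - start, 3), 'seconds,', hits, 'hits')

start = time.time()
hits = sum(1 for x in lookups if x in accounts_set)     # hash probe for every lookup
print('set lookup:', round(time.time() - start, 3), 'seconds,', hits, 'hits')

The scan makes a thousand passes over a million-entry list; the set does a thousand hash probes. No amount of micro-tuning the first version closes that gap.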
What Duncan and I would consider a ‘perfectly good machine’, my nephew wouldn’t want at any price. I do dual-screen 3D-CAD with a graphics accelerator card my nephew rejected 5 years ago for being too slow. Too slow for him, but it easily does what I want, which doesn’t include rendering super-high-speed graphics! Sadly for us, Duncan and I are very much a minority market, and it’s what folk under 50 are doing that drives the world.
Dave