Transcending POSIX: the end of an era?


In this article, we provide a holistic view of the Portable Operating System Interface (POSIX) abstractions by a systematic review of their historical evolution. We discuss some of the key factors that drove the evolution and identify the pitfalls that make them infeasible when building modern applications.

Some light reading to start the week.


Oh POSIX, what a love/hate relationship!

It’s helped a lot with application porting across platforms, but at the same time I think it managed to discourage innovation in favor of the standard.

It’s kind of a rite of passage for indie operating systems to become POSIX compatible and make it possible to port loads of software. While that’s a good thing, it also means that the unique traits and improvements that alternative operating systems bring go unused by the software community. While Linux itself carries enough weight to push Linux features to mass adoption, indie projects lacking a critical mass are strongly pressured into “follow the leader” roles, leaving their merits underappreciated.

The POSIX stuff lets you do the boring stuff (General I/O – to and from the disc, to and from a terminal of some sort) without having to worry about it for each and every platform. Sometimes it’ll come along and you’ll get something like threading added.
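To make that concrete (a minimal sketch of my own, not something from the article): the classic portable copy loop. The same few calls cover files, pipes, and terminals alike, which is exactly the boring stuff POSIX buys you.

    /* Minimal POSIX copy loop: read from a file (or stdin), write to stdout.
     * The same code works for files, pipes, and terminals on any conforming
     * system -- that's the "boring stuff" the standard takes care of. */
    #include <unistd.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int fd = (argc > 1) ? open(argv[1], O_RDONLY) : STDIN_FILENO;
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            if (write(STDOUT_FILENO, buf, (size_t)n) != n) {
                perror("write");
                return 1;
            }
        }
        if (n < 0) { perror("read"); return 1; }
        if (fd != STDIN_FILENO) close(fd);
        return 0;
    }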

Yes, there’ll be some things out there that’ll need to go outside that – but no matter what initial framework you put in place on a general operating system, they’ll appear anyway. (I mean, does POSIX do graphics, never mind the accelerated ones mentioned?)

> The POSIX stuff lets you do the boring stuff (General I/O – to and from the disc, to and from a terminal of some sort) without having to worry about it for each and every platform. Sometimes it’ll come along and you’ll get something like threading added.

My point was about the way POSIX shapes both operating systems and applications around itself rather than other interesting possibilities. The POSIX conventions are just about universally integrated into a myriad of operating systems and programming languages. It defines the primitives around which everything is based. Of course that is its strength, but at the same time it means we’re forever stuck implementing everything around those primitives rather than different ones. If we had more practical leeway to break with the “boring” POSIX conventions, it would open up new innovative opportunities.

I’d like to see general purpose I/O re-envisioned around newer, more powerful concepts like databases, transactions, SMP & distributed processing. Streams shouldn’t be limited to text; I should be able to pipe entire records and other data types between applications without twiddling with text conversions. We could recreate what POSIX does for text streams, but with high level records and objects. Not only could input and output be iterated like a stream, but we could have bidirectional access to data sets. Instead of opening files that a programmer has to read/parse into structures and then save back out, applications should have more direct access to data in object form without that boilerplate conversion we’re all so familiar with.

This could give our applications more power out of the box, while solving issues of data conversion, concurrency, clustering, transactions, locking, etc. Such features should be truly integrated between applications and OS such that using them is trivial and transparent, without a second thought. There’s no reason we can’t build new, improved primitives.
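To illustrate the pain point (a hypothetical sketch; the Record struct and its fields are made up for the example, not any real interface): shipping a structured record through a POSIX pipe today means the programmer hand-rolls the framing and conversion on both ends – exactly the boilerplate a record-aware primitive could eliminate.

    /* "Piping a record" today: the application, not the OS, defines the
     * framing and does the conversion. Raw struct bytes like this are only
     * safe between identical builds on the same machine -- which is the
     * point: the OS gives us bytes, everything else is our problem. */
    #include <unistd.h>
    #include <stdio.h>
    #include <stdint.h>

    struct Record {            /* hypothetical application record */
        uint32_t id;
        char     name[32];
        double   balance;
    };

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        /* Writer end: serialize by hand. */
        struct Record out = { 42, "alice", 99.5 };
        if (write(fds[1], &out, sizeof out) != (ssize_t)sizeof out) {
            perror("write"); return 1;
        }

        /* Reader end: parse by hand, hoping layout and endianness match. */
        struct Record in;
        if (read(fds[0], &in, sizeof in) != (ssize_t)sizeof in) {
            perror("read"); return 1;
        }
        printf("id=%u name=%s balance=%.2f\n",
               (unsigned)in.id, in.name, in.balance);
        return 0;
    }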

I enjoy this topic a great deal and I see so many opportunities for improvement and innovation with new operating systems and languages, but I don’t think it’s feasible due to how ingrained traditional POSIX conventions are everywhere. Everything from operating systems to libraries, languages, and applications depends on them. No matter the merit, it would be like starting over. But the software industry is notoriously stubborn about replacing things that are already considered good enough. There are people who try here and there (Plan 9), but in the grand scheme of things so much money has been invested into incumbent tech, and our industry’s momentum is so great, that it’s inconceivable that new primitives could significantly replace the old ones.

> Yes, there’ll be some things out there that’ll need to go outside that – but no matter what initial framework you put in place on a general operating system, they’ll appear anyway. (I mean, does POSIX do graphics, never mind the accelerated ones mentioned?)

It’s definitely possible to build high level primitives on top of basic ones. We do these abstractions in software all of the time, but the result is that we have millions of those abstractions in the wild, all more or less doing the same things (like parsing command line arguments, saving data into files, transferring data over networks, etc). We’re constantly reinventing the wheel, even when we use frameworks. More sophisticated primitives would not only save tons of work and create more consistency, but would also enable us to integrate applications in new ways that haven’t been possible using basic primitives.
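Argument parsing is a good example of a wheel POSIX actually does standardize – getopt() – and that still gets reimplemented endlessly. A minimal sketch of the standard way (the option letters here are arbitrary):

    /* POSIX getopt(): the standardized way to parse short options.
     * "vo:" means -v takes no argument, -o takes one. */
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int verbose = 0;
        const char *outfile = NULL;
        int opt;

        while ((opt = getopt(argc, argv, "vo:")) != -1) {
            switch (opt) {
            case 'v': verbose = 1; break;
            case 'o': outfile = optarg; break;
            default:
                fprintf(stderr, "usage: %s [-v] [-o file]\n", argv[0]);
                return EXIT_FAILURE;
            }
        }
        if (verbose)
            fprintf(stderr, "output: %s\n", outfile ? outfile : "(stdout)");
        return EXIT_SUCCESS;
    }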

The richer the abstraction, the more innovation is possible; conversely, the more basic the abstraction, the less we can do without also touching the millions of applications based on the primitive. Consider that the reason we’re able to innovate in the file system space today is because virtually all applications are based on the POSIX FS conventions. This means everything behind the file system primitives can be swapped out, improved, innovated upon (see ZFS, btrfs, FUSE, etc). I’d love to see this kind of innovation happening everywhere, but it is deeply constrained by the abstractions we have, which is why I say POSIX inadvertently holds us back at times.
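That swap-out works because application code like this (a trivial sketch of my own) only ever speaks the POSIX FS interface – it neither knows nor cares whether the path lives on ext4, ZFS, btrfs, or a FUSE mount:

    /* Everything below the stat() line can be replaced without touching
     * this program -- which is exactly why filesystems can innovate. */
    #include <sys/stat.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s path\n", argv[0]);
            return 1;
        }

        struct stat st;
        if (stat(argv[1], &st) < 0) { perror("stat"); return 1; }

        printf("%s: %lld bytes, mode %o\n",
               argv[1], (long long)st.st_size,
               (unsigned)(st.st_mode & 07777));
        return 0;
    }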

I don’t disagree – a richer and more sophisticated basis to do things would be good.

The problem is, as you mention, the wheel keeps getting reinvented. Anything you standardise has to be written first – so chances are it’ll get reinvented along the way.

Sometimes there are valid reasons for this – maybe the old wheel doesn’t support something that’s really, really needed, or somebody comes up with something new because they haven’t seen anything that fits their need (even if it exists), or there are disagreements over licences. (GNOME is the biggest project I remember being spawned by licence disagreements – though normally that would have led to Qt being supplanted rather than the whole stack.)

Sometimes it’s less valid: cult of personality, or Not Invented Here syndrome.

And that’s all before getting it put into a standard.

Getting something accepted by, say, POSIX will involve a lot of companies (and the people in those companies) agreeing to it. (Companies, because POSIX is a costly thing to get involved with.)

The more sophisticated your primitives are, the more there is to disagree over and the less chance of it being accepted.

I agree with your post overall. I wanted to respond to this sentence:

> The more sophisticated your primitives are, the more there is to disagree over and the less chance of it being accepted.

I think it depends. I’d agree with this strongly today, especially because we’re in a mature market. At the beginning, though, there would have been a much better chance to shape the future of tech before the technology had set. It’s much harder to change or replace something once it’s adopted en masse.

Look at networking standards such as 32-bit IPv4, the mess which is SMTP, etc. Modern engineers would be in a better position to optimize these designs thanks to hindsight, but if the new primitives aren’t directly compatible with the old ones, the reality is it’s a long hard struggle to get significant buy-in even when the experts do technically agree on the improvements. In the past I’ve used the metaphor of standards acting as cement, locking us into positions that are easy to get into but hard to change.

Yeah – IPv4 having 32 bits was a bit of an underestimation of the number of devices out there and the length of time it’d actually last – or maybe a canny guess as to how much power and memory the routers would need. (Probably not, though, given its age.)
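For scale (a quick back-of-the-envelope sketch of my own): 32 bits give roughly 4.3 billion addresses – fewer than one per person alive today – while IPv6’s 128 bits are effectively inexhaustible.

    /* The arithmetic behind the underestimation. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long v4 = 1ULL << 32;   /* 4,294,967,296 */
        double v6 = 1.0;
        for (int i = 0; i < 128; i++)         /* 2^128, too big for integers */
            v6 *= 2.0;

        printf("IPv4 addresses: %llu\n", v4);
        printf("IPv6 addresses: ~%.3e\n", v6);
        printf("IPv6/IPv4 ratio: ~%.3e\n", v6 / (double)v4);
        return 0;
    }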

IPv6 had the opposite problem – stalkery marketing types using the IPv6 address as a mechanism for figuring out exactly who you are and tracking you. Still, at least IPv6 binned the options section of the header.

I agree with your comment about SMTP. I’m surprised that nothing has come along and baked encryption/signing and identity confirmation into email itself. (I’m not counting PGP/GPG because it’s designed to use email as a delivery mechanism rather than being part of the email standard. It’d be complicated: somehow the receiver would have to register a public key somewhere for the client on the sending machine to pick up – and then there’s all the fun of companies’ incoming servers scanning for viruses and the like on the way through, which would probably make that pointless. *Sigh!*)

Much less combined sending and receiving. (I mean, sending and receiving aren’t entirely symmetric – but we still have POP and IMAP on top of SMTP: three different standard mechanisms for moving mail around.)
