Powerful languages have a problem: they allow things to happen that the language designer hasn't even imagined. Some of those things let mean-spirited sorts of people attack servers, steal credit card numbers, and make general malicious mischief.
So PHP version 5.3 needed a band-aid to help the naive web programmer avoid blowing him/herself away with good intentions poorly implemented. The band-aid was called "Suhosin".
Unfortunately, the three German engineers who developed Suhosin seem to have gotten busy doing other things, according to this post at Arch Linux, and the current Suhosin doesn't work with the current version of PHP 5. [update: If you don't follow all of the links Pierre provides, at least look at this mailing list post from one of Suhosin's developers.]
I had been thinking about brushing up my PHP skills, so I had installed PHP. With the upgrade to PHP 5.4 in Debian wheezy, Suhosin no longer loads; instead, it fills my error logs with complaints about the incompatibility.
So I checked, found that nothing else gets removed when I remove php5, and removed it.
When I really need it again, I'll reinstall it. Maybe by then the people who run PHP will have folded Suhosin's functionality into the language itself.
But this is not a solution; it's a knee-jerk reaction -- more first-aid fixes that don't really do the full job.
This highlights one of the problems in software architecture: the power of a powerful language is in its expressiveness. The more expressive a language is, the fewer limits there are on the things which can be expressed in it. But security, in current practice, requires setting limits. We need to give the programmer power, but we need to take power away from the end user.
There is an inherent conflict here. I mean, sure, we could go the direction taken with Java, using execution policies to tune the expressiveness available in the end user's context (there's a sketch of that mechanism after this list), but that has its own set of traps --
- Will the programmers remember to set up the policies?
- Do the programmers understand how the policies are used to secure the system?
- Does the policy end up preventing the end user from doing important things?
- Do legislators understand how such policies interact with law and regulation, and the potential for abuse of those laws and regulations?
- How does the government protect the people's security without inducing more chances for treacherous abuse?
- And how can a government make the people secure without excessively limiting their freedoms?
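To make the Java comparison concrete, here is a minimal sketch of the policy mechanism: a security manager consulting a policy file at run time. This is the standard SecurityManager machinery; the file names, directory paths, and class below are hypothetical, and the grant is deliberately tiny.

```java
// A minimal sketch of Java's execution-policy mechanism, using the
// standard SecurityManager plus a policy file. All file and directory
// names here are hypothetical.
//
// app.policy -- grants exactly one permission, and nothing more:
//
//   grant {
//       permission java.io.FilePermission "/opt/webapp/data/-", "read";
//   };
//
// Run with the security manager enabled:
//   java -Djava.security.manager -Djava.security.policy=app.policy PolicyDemo

import java.io.File;
import java.io.FileWriter;

public class PolicyDemo {
    public static void main(String[] args) throws Exception {
        // Permitted: the grant above covers reads anywhere under /opt/webapp/data.
        System.out.println(new File("/opt/webapp/data/config.txt").canRead());

        // Denied: the policy never grants "write", so under the security
        // manager this line throws java.security.AccessControlException.
        new FileWriter("/tmp/anything.txt").close();
    }
}
```

Even in this toy, the traps in the list above are visible: someone has to remember to write the grant, has to understand what each permission actually allows, and an overly tight grant locks the end user out of legitimate work.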
The answer of the US Constitution was "Use checks and balances, and keep it simple." Both of these principles have long since been set aside as legislators and special interest groups press for responsive government.
Is there something wrong here?
Can we, as general members of society, learn enough about systems to pare back the legal cruft that currently overburdens society (and that is a primary cause of budget problems, not to mention the bureaucratic abuses that show every sign of continuing to increase)?
Can these principles be applied to computer systems? If they can, how?
I think they can, but I'm not sure anyone reading this would understand. (I'm not intending to insult. No one has time to study every necessary subject, and this particular subject has been advertised by certain special interest groups as unnecessary.)
And it is no surprise to me that current trends in systems design run toward increasing complexity in the systems we provide. That parallels the political atmosphere, and it is precisely what we should not be doing.
We put power in the end users' hands (quite literally, with the new crop of portable information devices that match the supercomputers of a few years ago), and we spend a lot of money, time, and effort doing so. Then we spend a lot of money, time, and effort trying to limit that power to some definition of "right" uses. We are
- Not trying to teach the end user how to use the power wisely.
- Not trying to show the end user how to get around the traps.
- Not trying to give the end user more power to do right things.
- Not really trying to give the end user solutions, just things that we can sell as if they were solutions.
We can't understand everything the end user wants to do, and we can't predict what would be "safe" or "dangerous" beyond building crude and overly broad walls. (What we try to make, as an industry, are really straitjackets, but we tend to fail to get the user into them -- fortunately indeed, since success would leave us unable even to consider band-aids like Suhosin.)
And we (the primary movers of the industry) don't want to believe that end users could really want to use our systems, any more than we want to believe that the end user could understand new and appropriate ways to use our systems.
We don't want to believe that the end users might be smarter than the system designers about what the end user wants to do with the systems.
And yet, it is only the smart end user who can safely use the system.
Uhm, no, I don't have a happy solution to the problems yet, at least no quick, straightforward patches. The only real solution I can see is not going to be quick, not going to contribute immediately to anyone's monetary bottom line, and not going to be considered acceptable to any of the current crop of investors, managers, and accountants.