It's 10 o'clock, do you know where your repositories are?

Fascinating article about various package manager repositories not being… let’s say as secure as one might hope.

In this case, a number of large companies had locally hosted repos for in-house use. However, by creating a new package of the same name on a public repo, and just making sure its version number was higher, the researcher could get automatic package updates to pull the code from the public repo instead of the companies' in-house ones.
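A toy sketch of the naive "highest version wins" resolution that makes this attack possible (all package names, registry names, and versions here are invented for illustration):

```python
# Toy sketch of dependency-confusion-prone resolution.
# Names and versions are made up; real resolvers are more complex,
# but the core preference for the highest version is the same.

def resolve(package, indexes):
    """Return the (version, source) pair a naive resolver would pick."""
    # Gather every offer for the package across all configured indexes.
    offers = [(version, source)
              for source, packages in indexes.items()
              for version in packages.get(package, [])]
    # A naive resolver simply takes the highest version, wherever it lives.
    return max(offers)

indexes = {
    "internal": {"acme-billing": [(1, 2, 0)]},    # the real in-house package
    "public":   {"acme-billing": [(99, 9, 9)]},   # an attacker's squat
}

print(resolve("acme-billing", indexes))  # the attacker's public version wins
```

The usual mitigation is to pin internal namespaces (e.g. an npm scope) to the private registry so the public offer is never consulted at all.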

And apparently npm allows code execution after updates finish? Yikes.
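For context on that point: npm runs lifecycle scripts such as `postinstall` automatically when a package is installed or updated, so a hijacked package gets code execution on every machine that pulls the new version. A hypothetical package.json (package and script names invented) showing the hook:

```json
{
  "name": "acme-billing",
  "version": "99.9.9",
  "scripts": {
    "postinstall": "node collect.js"
  }
}
```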

I recall reading a year or two ago about a researcher - maybe the same guy - who had found a way to obtain Apple source code via some sort of repository finagling. I can’t seem to find the link right now, but I also recall an article that @Syonyk referred to once about how to inject nefarious code into public JS repositories in ways that were absolutely not at all obvious, or even barely detectable. Perhaps that one could be found again too.

In short, I think this blows some large holes in the “open source is more secure” philosophy - I’m not arguing for or against it here, just pointing out that in many cases reliance on open source and public code is a grave danger that is difficult to protect against if you still want the benefit of the code.

npm allows C++ to be built. I think the code enabling that can be arbitrary. On Windows I’ve used npm to install the build tools needed to build C++ for other npm packages.
Maven has a similar ability for generating Java code before compiling it. I expect lots of build systems do.
I was aware that projects can be taken over by new (possibly malicious) maintainers.
I think this is less about open source and more about an explosion of dependencies because they are “free”. Relying on more code isn’t free in terms of developer head space, managing dependencies, or attack surface area. Nor in code size, in contexts where bytes matter.
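To illustrate the Maven point above: a plugin bound to an early lifecycle phase runs before any project code is compiled. A sketch using the real exec-maven-plugin (the script it invokes is invented) - fetching a spoofed version of any such plugin gives code execution at build time:

```xml
<!-- Hypothetical pom.xml fragment: a plugin bound to generate-sources
     runs arbitrary commands before compilation even starts. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>3.1.0</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <executable>sh</executable>
        <arguments>
          <argument>-c</argument>
          <argument>./generate.sh</argument> <!-- could be anything -->
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```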

“Free” and open source go hand in hand, so unless you wish to argue the semantics of the OSF, then you’ve just restated my point.

I think that the open source community as a whole, as a pattern (and that includes the public package repositories, etc.), absolutely encourages going in this direction. If you have to write everything yourself, you tend towards using the fewest dependencies possible, not the other way around.

I’ve seen tangled code that was proprietary and would have been simpler had it not relied on classes, libraries, and services which were also proprietary. Those dependencies were “free” because they already existed and were available within the company, but adding the dependencies constrained both sides. This was made worse by less than perfectly clear lines between public interface and implementation details, which seems common in both open source and proprietary code.

Very much. So does a large internal code base, but it’s hard to beat open source for scale here.

Yes, and open source is a huge enabler of that other way, but not the only one.

Edit: I think perceived cost drives individual decisions to write new code vs add a dependency. Being transparently available drives down the cost of finding something that sort of works as a potential dependency. That makes me wonder about ways I’m organising code for discoverability.

I think people confuse “all bugs are shallow” with “more secure” - there’s no incentive for people to fix security issues until they’re known, in both closed and open source, but at least a closed shop can pay to assign someone to do it.

It can happen in open source, but it seems to be much less likely; especially for edge cases like this, which are often met with “don’t do that” even though everyone and their brother DOES do it in practice, as the alternatives are unreasonable.

Indeed it can! After… really very much more work than ought to have been required.

I don’t think it changes much. “Open source is more secure” as an argument more or less asserts that because one can evaluate code from an adversarial perspective, people are evaluating code from that point of view. Sometimes it’s true, sometimes it’s not. I would, however, wager that people think it’s true far more often than it actually is.

Having done red teaming in the past, the vast majority of programmers just don’t think about things from a “How could this be abused?” perspective. It’s not a particularly hard skill to teach (for a while, I was one of the regular instructors for a course on it), and often you can watch people’s eyes light up as they realize that “thinking evil” is both fun and useful. We had an “Evil Thought of the Day” club for a while, just to practice. :wink:

The other, and IMO bigger, problem is that things are often totally fine when they’re designed and implemented. Then, 10 or 20 years later, they get asked to do things that they were never designed to do, and while they appear to do it on the surface, there are weird things that break. Page tables weren’t designed assuming they’d be holding security-sensitive information - knowing someone else’s page mappings was interesting but gained you nothing useful. Then, ASLR/KASLR came out to block classes of attacks, and it turns out that page tables leak in a wide variety of ways. It just didn’t matter when they were designed.

I would assume the same applies in this case - “split repos” were never part of the design criteria, and that one could have multiple mirrors was a redundancy/availability thing. Start having internal and external repos, and, whoops. You violated an implicit assumption in the design, and it breaks somehow.

This is depressingly common for browser extensions. :confused: Some extension is well done, works as advertised, is popular, and eventually the maintainer either gets bored or someone makes a very compelling offer for it. New maintainer turns it evil, it auto-updates, and, hey, quick buck.

I remain depressed by just how many web dependencies any modern “desktop app” (really, Javascript and Electron) pulls in. One or two will, pretty regularly, be broken on 64-bit ARM, too. :frowning:

Well, when you have companies like Red Hat asserting:

As the largest open source company in the world, we believe using an open development model helps create more stable, secure, and innovative technologies.

you tend to wind up with articles like this, or any of a bunch of other mainstream claims that a quick search of “open source more secure” will show you.

This really sets managers (who often don’t know any better) in the mindset of directing their teams, whenever possible, to use the largest open source projects they can, for the same reason that bigshot CTOs buy Oracle. The phrase “nobody ever got fired for buying Oracle” should IMO be a fireable offence itself, yet it’s applied in like form to using pretty much anything in any large-scale package repository as justification.

It also encourages a managerial tone away from validating code you import (I have been told by so many managers “don’t bother testing that, it’s from <insert project name here>” and forbidden to spend more time on it) and towards blind acceptance. When managers encourage or mandate, the team’s hands are tied no matter what they suspect or are concerned about. And the general sensation of open source being some superior codebase is heavily pushed by Red Hat and others (I know from direct experience) as one of their competitive advantages.

So, you’re right, this sort of problem won’t necessarily change perception, but I do think it’s another piece of ammo in the argument that it’s not inherently more secure simply by being open source. And that alone needs to be said as loudly as possible in as many software shops as possible, as a good first step of many in the long process of actually comprehending one’s dependencies and how they’re obtained.

Absolutely, I totally agree. The thing to keep in mind, though, is again the pressure (from management, via advertising and heavy marketing) that this is a “best practice” to begin with. I’ve worked with a variety of software shops over the past few decades and the attitudes ranged from “import everything possible” to “don’t import a damn thing - write your own or use one of our validated codebases, and only with reason”. Of course, in a lot of cases you’re using toolchains that have their own dependencies (a toolchain is, itself, a set of dependencies), etc., and it’s turtles all the way down. But it is depressingly rare to find a shop that has, as a managerial and developmental philosophy, “check your reason for including that code”, let alone “include as absolutely little code as necessary to do the job”.

Golang, for good reason, shied away from package management in its first several versions because the philosophy was “write only what you need”, based on the idea that even cutting and pasting the specific code you needed was superior to a blind “import x” and letting the package manager slurp it up for you. I believe they changed their mind due to the immense pressure of managers and developers around the world who are utterly convinced that this is wrong. That level of conviction speaks to a serious misunderstanding of the entire concept of dependency – a misunderstanding that has led to these problems, including “project got too big for its breeches”, which I see as yet another symptom and not a cause.

I’m not sure I’m technically qualified to make much of a distinction either way, but it seems to me like a cause of such problems could be the ‘ignorant middle manager types’ confusing “more secure” with “secure”.

For further discussion:

and lest the Linux folks get too high on themselves:

Both of these bugs were disclosed just this year, and both seem to be about the same level: machine access/an account is required to exploit the flaw; from there, the vulnerabilities could be pretty severe.