The Hacker Mind Podcast: Inside DARPA's Cyber Grand Challenge
DARPA’s Cyber Grand Challenge in 2016 showed the world what's coming -- autonomous adversaries -- and raised serious questions. How can organizations react to something that makes decisions in milliseconds? How can you still have humans in the loop when reaction time is key? And, how can organizations defend or stop something that increases its own cyber capabilities autonomously?
In this episode of The Hacker Mind we go behind the scenes for this first completely autonomous capture-the-flag competition at DEF CON 24 with Team ForAllSecure.
Never miss another episode. Subscribe to The Hacker Mind podcast on:
The Hacker Mind EP 03: Inside DARPA's Cyber Grand Challenge
Host: Robert Vamosi
Guests: Alex Rebert, Tyler, and Ned
Walker: We’re here today to talk about bringing autonomy to the sport of hackers to capture the flag.
Vamosi: That's Mike Walker, former program manager for the DARPA Cyber Grand Challenge. In 2015, at DEF CON 23 no less, Walker announced that the first Cyber Grand Challenge would be held the following year in that same ballroom at the Paris Hotel and Casino in Las Vegas, Nevada. Capture the flag, if you don't already know, is a popular game among hackers. This time it would be played autonomously, entirely by machines.
Walker: We're gonna take this room, knock down those two air walls next year, make it three times as big, install seats, and have a free live event where machines play capture the flag against each other in real time with sports casting visualization. Imagine a gigantic eSports event where all the contestants are machines.
Vamosi: Apart from the eSports spectacle of it all, CGC in 2016 showed the world what's coming: autonomous adversaries -- and raised serious questions. How can organizations react to something that makes decisions in milliseconds? How can you still have humans in the loop when reaction time is key? And, how can organizations defend or stop something that increases its own cyber capabilities autonomously?
Welcome to The Hacker Mind, an original podcast from ForAllSecure. It's about challenging our expectations about the people who hack for a living.
I'm Robert Vamosi, and in this episode I'm talking about the rise of security automation, and what we learned from the first and only Cyber Grand Challenge in 2016.
The Defense Advanced Research Projects Agency, or DARPA, periodically hosts these grand challenges that are designed to push technology to the bleeding edge. In its first autonomous vehicle challenge in 2004, no entrant finished the race, but the following year, all but one vehicle did. And by the third challenge in 2007, all the vehicles were successful in navigating a model urban landscape without human interaction. I don't know about you, but that seems like a tremendous rate of technological progress in just a short amount of time. So, in 2016, the DARPA Cyber Grand Challenge was designed so that cyber reasoning systems, or CRSes, could autonomously scan, verify, and fix software vulnerabilities, all the while playing attack-and-defend capture the flag against six other machines. Pretty cool, eh? Machines playing attack-and-defend CTFs. It's important to stress that CGC was a significant event in security. Some of the highest government officials were briefed on it, and it is likely shaping cyber defense strategy today. So I have to ask: if autonomous vehicles got three challenges, why has there been only one autonomous security challenge? Wouldn't it be exciting to see what could happen next? Or maybe we learned enough from this one Cyber Grand Challenge?
Ned: When DARPA recognizes that something is coming, they seem to be pretty astute that some technology is coming up, maybe a decade away or a couple of decades away.
Vamosi: This is Ned. When the Cyber Grand Challenge was first announced, he was still a computer science student at Carnegie Mellon University.
Ned: They tried to get the ball rolling on it, so that we could have some kind of competition to create an incentive to build a great system, and to provide funding for that. So I think the ball is rolling; I've seen a lot of excitement in this area. I think that goal was certainly achieved, to get some excitement and investment in research in this area. I wanted to work on the project with ForAllSecure, basically with the Mayhem team, back when I was in school, because I was at CMU where that research project started. It must have been sometime around 2014 that I started getting interested in working on Mayhem, and then at some point it turned into: there's going to be this huge competition, there's going to be CGC. And whenever it was I heard about that, I wanted to try to participate.
Tyler: At the time they started the Cyber Grand Challenge, CMU and Mayhem were already around and leading the way in terms of automated program analysis and exploit generation.
Vamosi: This is Tyler. Prior to working on CGC, he was a researcher in cybersecurity at Carnegie Mellon's CyLab.
Tyler: As soon as it was announced, everyone was like, oh, look at this, that's basically what they're talking about, right? So I don't remember where we heard about it, but of course we would have heard about it immediately, because it made sense.
Vamosi: Tyler's talking about Mayhem, which in 2014 was not yet a commercial product. It was still a proof of concept for autonomous program analysis.
Tyler: It was originally developed at Carnegie Mellon; there were a couple of similar iterations. So the first one was based off of KLEE, which is an open-source symbolic executor. There were some modifications made to that to add exploit generation and things like that.
Ned: So, for the competition, I didn't personally work on the Mayhem part, but I worked on some of the other stuff that took the output of Mayhem and then turned it into an exploit that could be used. And then, maybe as a terminology thing, Mayhem kind of encompasses the whole thing as well, but the symbolic execution part ties into a whole system.
Vamosi: So what is symbolic execution? And how is it important to software security? It's a way to assign symbols rather than concrete values in order to map out code execution. Perhaps Ned can explain it best.
Ned: We're starting with a problem, which is we have a binary that's going to run on a computer, some machine, some CPU, and some attacker-controlled input is used by the binary to do something. You might have an image parser that's taking an image from the internet, from someone you don't know, and is trying to translate that into something that can be presented on the screen, for example. And so what symbolic execution does is it tries to generate inputs that cover different parts of the code that's trying to handle the input. There are different ways you can do this. People might be familiar with fuzzing; fuzzing is essentially just picking random inputs, sometimes just looking at what a random input covered, and trying to analyze something based on that. What Mayhem, what symbolic execution tries to do is it actually models the CPU. So it starts with an input and it starts executing, and when it gets to a point where a branch is going to happen, some decision has to be made based on the input that's controlled by the attacker. Symbolic execution means we try to actually compute an input that exactly goes one way or the other, whereas with fuzzing we just try to guess until we figure out how to flip it one way or the other. Symbolic execution means that we try to compute that from a formula. You can imagine it's like randomly guessing quickly for an answer versus symbolic execution solving the equation. That's kind of how it works.
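To make that contrast concrete, here is a minimal sketch in Python. The toy branch condition, the variable names, and the use of the z3 solver are illustrative assumptions, not anything taken from Mayhem itself; the point is only the difference between guessing inputs at random and computing one from a formula.

```python
import random
from z3 import BitVec, Solver, sat  # pip install z3-solver

# A toy "branch" the attacker-controlled input must satisfy, e.g. the
# binary only reaches the vulnerable code when this condition holds.
def branch_taken(x: int) -> bool:
    return ((x * 7 + 3) & 0xFFFFFFFF) == 0xDEADBEEF

# Fuzzing: guess random inputs until one happens to take the branch.
def fuzz(max_tries: int = 1_000_000):
    for _ in range(max_tries):
        guess = random.getrandbits(32)
        if branch_taken(guess):
            return guess
    return None  # a narrow condition like this may never be hit by chance

# Symbolic execution: model the input as a symbol and solve the branch
# condition as an equation instead of guessing.
def solve_branch():
    x = BitVec("x", 32)          # 32-bit symbolic input
    s = Solver()
    s.add(x * 7 + 3 == 0xDEADBEEF)  # bit-vector arithmetic wraps like the CPU
    if s.check() == sat:
        return s.model()[x].as_long()
    return None

print("solver found input:", hex(solve_branch()))
```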
Vamosi: Okay, so DARPA had this idea: let's see if we can automate vulnerability discovery, verification, and remediation. To do that, they needed a proof-of-concept system. So they created their own OS, or operating system, for the Cyber Grand Challenge.
Tyler: Partly to make sure everyone was on a level playing field, they developed basically a new, not quite operating system, but almost like an operating system that it ran on. And the idea was, because it's a completely novel system, no tools that anyone's developed are going to work out of the box, so everyone is going to have to do a bunch of work to get things working with the DARPA system. So you can't just say, oh well, those guys already had their thing on the shelf and just pulled it off, and that's good.
Vamosi: DARPA’s operating system was not entirely created out of thin air.
Ned: I'm pretty sure it was very closely related to Linux. I think the main thing is that the competition wanted to select for the quality of the algorithm, the kind of search that you were doing; it wasn't really trying to determine which team had the best integration with all of the random idiosyncrasies of Linux or another OS. Really, I think they just tried to scope it so that people wouldn't have to spend tons of time on all the menial work that would go into supporting, I don't want to say a real OS, but a popular OS.
Vamosi: So how did Team ForAllSecure plan for this?
Tyler: It was a combination of adapting what we had and writing a lot of new stuff, because the original was developed primarily as a research thing, for academia and publishing papers, versus here we had a very specific goal in mind of getting as many bugs or exploits as possible.
Rebert: The technology was x86, which is what we were working on at the time.
Vamosi: This is Alex Rebert, co-founder of ForAllSecure and leader of Team ForAllSecure at CGC.
Rebert: They had a patching component as well, which we didn't have at the time, that we could develop. That was great timing for us, because it provided the funding to do research on an important product for two years.
Vamosi: Remember, CGC is essentially an attack-and-defend capture the flag competition comprised entirely of machines. So it's not surprising that Mike Walker, the DARPA program manager at the time, was also a former capture the flag organizer and player.
Tyler: As a player, in an ideal world it would have been exactly the same as a CTF, which it wasn't, but it was certainly similar and certainly analogous in a lot of ways: you get to throw traffic and things are happening in real time. The qualification round and the final round were slightly different, but they all followed a similar pattern where you're given a binary, and then you have to find a way to crash the binary, a so-called proof of vulnerability.
Vamosi: Proof of vulnerability is important to understand here. It's not enough that a machine finds a vulnerability; it then has to patch it and share that patch with the other machines.
Rebert: For the qualification they dumped, I think it was 25 challenge binaries, which we were supposed to feed into our system. So it was not attack and defense; in that sense, you could say it was jeopardy. They had us upload the challenge binaries, as well as send a patch that prevented the binary from crashing on whatever bug we found, and then we were scored on whether we found a bug, whether we were able to protect the binary, and whether we retained the functionality and performance of the binary we patched. That's what they call consensus evaluation.
Vamosi: Consensus evaluation. This is important: DARPA was checking the quality of the patches that the machines provided. If a machine shares a flawed patch, then it is effectively announcing to the world how it might itself be vulnerable to attack.
Rebert: Sharing is a nice way of putting it. It was more like, if you pushed something privately, it'd be very, very hard to know if the patch is actually secure. It's security through obscurity, because the other machines don't know what patch you applied and secured against on that issue. The competition was trying to get to a point where all the patches were public. That means if a machine patched something, and did it badly, the other machines could analyze our patch, and then they would know that we left out something that would allow us to be exploited. It improved the quality of the competition, because we knew that other competitors could analyze our patch, so we had to do a good job.
Vamosi: So the qualifier was in 2014-2015, the final was in 2016.
Tyler: So there were quite a few changes between the qualification event and the final event. One of the largest changes was that in the qualification round, our goal was only to crash the binary. So you take a binary and it segfaults, accesses some out-of-bounds memory or something; that's enough, you've done as much as you need to do. For the final round, you had to have a more complicated proof of vulnerability. So, in addition to crashing the binary, you'd need to control the instruction pointer, to demonstrate that you can control the program flow, and you'd need to control another register. So you need to be able to say, I can actually take control of it, not just, I can make it do a null dereference or something. So that was a pretty substantial difference. Another difference was that for the qualification round, everyone worked in isolation, so our team could operate and everything we did was kind of on its own, whereas in the final round, if we produce one of these vulnerabilities, it'll get sent to another team, which means that now you have this more game-theoretic thing where the other team will have visibility into what you did and be able to analyze it, and likewise we need to have capabilities to analyze other people's exploits. And then you also have to analyze the patches, because just because a team made a patch doesn't mean that it works. So there's a lot of stuff with that.
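As a rough illustration of that stronger requirement, the sketch below checks a hypothetical crash snapshot for the kind of control Tyler describes. The CrashState structure, the register names, and the marker values are all invented for this example; the real CGC proof-of-vulnerability format and its negotiation protocol were considerably more involved.

```python
from dataclasses import dataclass, field

# Hypothetical snapshot of register state at the moment a binary faults.
# In the qualification round, reaching any such fault at all was enough;
# the final round required the stronger check below.
@dataclass
class CrashState:
    eip: int                                   # instruction pointer at the fault
    regs: dict = field(default_factory=dict)   # general-purpose registers

# Marker values an exploit might plant in its input to prove control.
IP_MARKER = 0x41414141
REG_MARKER = 0x42424242

def proves_control(state: CrashState) -> bool:
    # Final-round bar: the instruction pointer lands on a value we chose
    # AND some other register holds a value we chose, not merely a crash.
    controls_ip = state.eip == IP_MARKER
    controls_reg = any(v == REG_MARKER for v in state.regs.values())
    return controls_ip and controls_reg

crash = CrashState(eip=0x41414141, regs={"eax": 0x42424242, "ebx": 0})
print(proves_control(crash))   # True: both markers landed where we aimed them
```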
Vamosi: The final was organized into rounds of challenges, each with its own set of binaries that had to be scanned and patched while the machines both attacked other machines and defended themselves.
Tyler: I think the actual competition was eight or ten hours or something. The way that it was structured, which was also a little bit different from the qualification event, was similar to a normal capture the flag contest. The contest starts, and there's a certain number of problems available to solve. Those challenges will last for some number of hours, and at some point they go away because the time's up or whatever, and new problems will cycle in and old problems will cycle out, and so on. So over the course of however many hours, eight or ten or whatever it was, something on the order of 100 binaries were phasing in and out.
Vamosi: In a CTF, the winner is whoever has the most points at the end. In jeopardy style, the challenges are arranged by theme, with more points awarded for the more difficult problems solved. In attack and defend, the scoring is much more difficult, since it includes points for defending your box as well as points for attacking others. The CGC final was designed to be 95 rounds of attack and defend, each with points awarded for different responses.
Rebert: I think it's good to spend a little bit of time to understand CGC, because a lot of decisions made during the competition were driven by the scoring system. The game was split into rounds of five minutes, and during each round we would work on the challenge binaries that were in play at the time, and there were three components to the score for each binary. The first one was how many teams did we score against; so we sent an exploit to the six other teams, and how many of them did we exploit? That was the first component. The second component is did we get exploited by anyone, so that's the defensive aspect. And the third component was, if we patched, how much did we hurt performance and functionality. DARPA would run our patched binaries against a large number of test cases and compute whether the output changed compared to the original binary. If we made the binary more than five percent slower, we would start to lose points, so performance was really important, and functionality was important to a patch. And one other thing that we were planning around was that patching was pretty expensive. So you had this round, and if you patched on this round, your service was down on the next round, N plus one, and you would not score any points on that binary. One of the teams decided to patch everything, and it didn't quite work out as well. If you look at the scoring system, if you have a challenge binary that's not going to get exploited by anyone, you shouldn't patch it, because you could lose points from that amount of downtime, and then you lose points because you might have broken functionality, or you might have made it slow. So what do you want to do? Really, the optimal strategy is seeing one round into the future, knowing whether it can be exploited or not.
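A back-of-the-envelope version of that patch-or-don't-patch trade-off might look like the Python sketch below. The point values and the expected-score formula are made up for illustration; they are not DARPA's actual CGC scoring formula.

```python
# Illustrative constants only; these are assumptions, not DARPA's scoring.
AVAILABILITY_POINTS = 100      # per round, for a binary that stays up and fast
DOWNTIME_ROUND_POINTS = 0      # the round after you patch, the service is down
EXPLOITED_PENALTY = 100        # points effectively forfeited if you get exploited
PATCH_OVERHEAD_PENALTY = 10    # a slow or lossy patch costs a little every round

def expected_points(patch: bool, p_exploited: float, rounds_left: int) -> float:
    """Rough expected score for one binary over the remaining rounds."""
    if patch:
        # Lose one round of availability, then pay a small overhead each round.
        return DOWNTIME_ROUND_POINTS + (rounds_left - 1) * (
            AVAILABILITY_POINTS - PATCH_OVERHEAD_PENALTY
        )
    # Unpatched: full availability, but risk losing defense points.
    return rounds_left * (AVAILABILITY_POINTS - p_exploited * EXPLOITED_PENALTY)

# If nobody is likely to exploit the bug, patching is a net loss...
print(expected_points(patch=True,  p_exploited=0.05, rounds_left=10))  # 810.0
print(expected_points(patch=False, p_exploited=0.05, rounds_left=10))  # 950.0
# ...but if exploitation is near-certain, patching wins.
print(expected_points(patch=True,  p_exploited=0.9,  rounds_left=10))  # 810.0
print(expected_points(patch=False, p_exploited=0.9,  rounds_left=10))  # 100.0
```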
Vamosi: Okay, this is really important. Naturally, you want to patch your vulnerable box, right? Otherwise the bad guys will get in. Or maybe they won't, because the vulnerability isn't that exploitable, and your time could be spent elsewhere. So there's a cost to patching everything. Team ForAllSecure took the strategy that they wouldn't patch everything, but they figured maybe others would, and perhaps waste their time doing so.
Rebert: I don't remember how many binaries were actually exploited, but it wasn't that many; it was less than half. So what we tried to do was use indicators that we could get from the game state to have an idea of whether we think a binary is being exploited, or will be exploited soon. If we find a bug in that binary, you know, two seconds after analyzing it, then we have a pretty good idea that other teams will find it too. And then we would decide to patch, and that seems like good defense hygiene. For instance, the game would report to you if a binary crashed, so you would know, okay, my binary crashed, someone found a bug, but I don't know if they were able to exploit it. We thought it was likely that some teams would implement a "patch on crash" strategy, meaning if they see a crash, they're going to think that it could be exploited, so they're going to patch. And so we tried to make other teams make those mistakes. So if we had a crash on one binary, and we couldn't exploit it, we would still send it out, and then hope that the other team would spend time trying to patch it.
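Here is a small sketch of the two heuristics Alex describes, deciding when to patch and when to throw a decoy crash. The signal names are invented for illustration and are far simpler than what a real cyber reasoning system would track.

```python
def should_patch(found_bug_within_seconds: bool, being_exploited: bool) -> bool:
    # If our own system found a bug almost immediately after analyzing the
    # binary, other systems probably will too; likewise if we can already see
    # ourselves getting hit. Otherwise the patch downtime isn't worth it.
    return found_bug_within_seconds or being_exploited

def should_send_decoy(we_crashed_it: bool, we_can_exploit_it: bool) -> bool:
    # A crash we can't turn into a proof of vulnerability is still useful:
    # send it anyway and hope an opponent running "patch on crash" burns a
    # round of availability patching a bug that was never exploitable.
    return we_crashed_it and not we_can_exploit_it
```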
DARPA TV: Live from the Paris Hotel in Las Vegas, Nevada. It's the DARPA Cyber Grand Challenge final event.
Vamosi: So you have to imagine this. Picture a giant Las Vegas ballroom transformed with hundreds of seats, all facing an air-gapped stage upon which sit seven large and very colorful boxes with racks of servers. These are the final-round cyber reasoning systems. When I say air-gapped, it means they are physically and operationally disconnected from any outside wireless or cable systems, so there's no way for humans to interfere with the autonomous CTF. Occasionally, data is burned to a Blu-ray disc, and then a robotic arm lifts the disc out of the air-gapped area so that DARPA can score each round and pass some of that information out to the teams.
Tyler: Some of the data that they gave us we could see, like network traffic, for example. And at some point we were looking at that, before the announcers and all that started, so we were just sitting there quietly. We were doing that and we said, oh, this is weird, we didn't post any network traffic this round. And, you know, even if we didn't find an exploit for a service, we would at least pretend to send an exploit just to generate network traffic that could confuse other teams, because it's one of those game-theoretic things; this is how you play a CTF. So it was really strange that we didn't post anything that round. Normally that doesn't make sense, because we unconditionally always post something, so there's clearly a problem here. And the next round, you know, it was the same. And the next round it was the same. We're like, okay, something has gone very wrong here. So that was kind of all we could do until they started the live announcement stuff, and we had to participate in some of that, because they were trying to make it interesting, so they'd have interviews with the teams.
DARPA TV: Welcome everyone to the first automated cybersecurity competition.
Vamosi: So you have to understand that the live audience only came into the ballroom at the end, for the last two hours. They watched a TV program that was a compressed playback of the day so far, except the team interviews were all done live. That meant the team captains already knew how their machines were actually performing, even though the TV audience may have been told something else.
Tyler: And so Alex Rebert, as our team captain, was the main person responsible for going up and talking to them. And it was pretty sad, because, again, they were playing starting from the beginning, and so we couldn't talk about what was going to happen in the future, because that would kind of ruin the vibe, I guess. And so they'd be like, oh wow, the first two rounds Mayhem is doing great, how do you guys feel, are you feeling good? It's like, great, wonderful.
DARPA TV: In this next scoreboard, scoreboard seven, we will actually see Mayhem overtaking the previous leader to take first place.
DARPA TV: You were battling it out for a while, and now you've actually firmly taken the lead, with an 8,000 point lead.
Rebert: Yeah, we're very excited to have started building up a lead from the outset.
DARPA TV: In round 60, you're currently in the lead, with a 10,000 point lead. How do you feel?
Rebert: I'm still excited to be in the lead.
Rebert: So DARPA has been sending us the submissions during the competition, and we have been looking at that. And unfortunately our system had a technical difficulty and stopped submitting patches and PoVs.
Ned: It was incredibly stressful and upsetting. It's hard for me to know exactly how everyone else was feeling. They seemed more upset than I did, but I honestly think that was just a coping mechanism for me. It was incredibly upsetting to see that something might have been going wrong and that we didn't have enough information to tell why. It was just such a confusing thing, because we had been doing so well in the beginning. I think if we had lost because of a small error when we fundamentally had the best approach, it would have been just devastating. I think there's some video footage of it; we were all just kind of, totally... I think some of the teammates were on the verge of tears.
Tyler: Meanwhile, some of our friends showed up to start watching, and they're like, oh, how's it going, it looks like it's going great. And we're like, no, it's going so bad, our thing shut down and we're so screwed; it's not fun to think about. So we went and tried to talk to people from DARPA, and we're like, something's wrong, there seems to be something wrong with our system, because we would always post something; there's no reason we would ever stop. And aside from this data, the only thing we can do is look at the physical machine, which is right there, and see the power usage and temperature of things inside. So we're looking at it, like, okay, well, it's still using a bunch of power, so it's still thinking real hard, so something's going on. But you're not allowed to access it; it's completely air-gapped; there's no network connectivity to get to it. We asked them if we could reboot our machine or something, since it should automatically reconnect to the network and everything when it comes back, and they said no, no intervention whatsoever. Eventually the folks from DARPA were like, well, everything looks fine from our end, so it's not our fault. So, sorry, I guess your stuff's broken.
Vamosi: Here's Mike Walker responding to the game pause and the request by Team ForAllSecure to reboot their system.
Walker: Disaster recovery only happens if there is a hardware-induced or framework-induced failure. And the result of Tim's game pause and audit and all his team's work was: there was no hardware-induced or framework-induced failure. Mayhem had simply stopped playing the game, and we had to allow it, under the full autonomy directive, to be an autonomous system. And sometimes that means autonomous systems fail.
Tyler: We were pretty upset, because as far as we could tell, up to that point it was doing pretty well. We could see the network traffic and say, oh, this looks like it successfully found an exploit for this service, so it's pretty good. We didn't have any idea what the scores were, because the scores involve a bunch of calculations about performance and how fast things are running, and the only people who could run that was DARPA. So we saw that and we're like, well, this was a fun two-year experiment, to end in flames; that's unfortunate. We had brought a bunch of ForAllSecure and Mayhem t-shirts or something, so we figured, well, I guess we'll hand them out now while it looks like we're doing really well. So we went to pass them out among the people who were watching. At that point we just kind of sat there. They have a little lounge area with couches for each team to sit around, so we're just sitting there looking at each other, like, well, what are we doing now? Not that we were doing anything before, because we can't touch the system, but we were really hoping to do pretty well here. We basically just decided that we must have lost at that point.
Vamosi: Remember how we said there was a cost for patching vulnerabilities, or even sending exploits? There were points for just sitting there, too.
Tyler: I don't remember the exact timelines, but a little while into the live portion we started realizing that, for the portion where we were working, our score was going up real fast. And we started running the numbers in our heads, saying, okay, well, being turned off isn't actually the worst thing that could happen; you still get points if you're turned off, just not very many. So, if we get enough points, this might all work out. We started doing some back-of-the-envelope math, and slowly we're like, this might actually still work out for us. So we were still, I would say, cautiously optimistic. And then they keep doing the announcements and the livestream and everything, and our friends are like, wait, I thought you said it turned off, but you're still doing well, what's going on? And I'm like, well, I don't know, I guess it's just doing okay, I'm not really sure what to say. And then: the winner of the DARPA Cyber Grand Challenge...
DARPA TV: The winner of the DARPA Cyber Grand Challenge: ForAllSecure, and their bot, Mayhem.
Rebert: I think it was a combination of a couple of things. The first one is, we did very well early in the game, and we built up a lead early in the competition. The second thing is kind of an artifact of the scoring system. I don't know if it's because the binaries changed or something else, but there were not that many exploits being played, which means that being down and not patching was actually a pretty good strategy. Because, as I mentioned, not patching when the binary is not getting exploited is the optimal strategy. And so we scored really well on the functionality and performance of all of the binaries that we had a patch for. And we didn't score that badly on defense, because we weren't being exploited. And so being down was one reason why; the combination of the scoring system and other teams not seeing our exploits while Mayhem was down was a big reason why we were able to maintain the lead that we got in the beginning.
Tyler: I think we have a couple of random theories about what happened. Part of it is, we had no real motivation to capture a bunch of logs, because it happens once, and after it happens once it doesn't really matter; there's nothing we can do to fix anything. Some of the theories we had were: it's possible one of the disks in our primary database machine failed, which just made it start going slower. It seemed in general that something was just going slow on our end. I think we heard from some of the officials that our system was trying to update information for rounds that had already completed. So it was, like, round 60 and we were talking about round 58, which their API would then just completely reject. So something was happening where we were getting too much data, and we couldn't ingest it fast enough, and then we processed it and we had a backlog and it just got worse and worse. We have some theories for why that may have happened, but we don't really have any good ideas. And in the end, toward one of the last rounds, something lucked out where one of the challenges was around long enough that even though we posted something about ten rounds old, the challenge was still up. So we had this kind of amazing, epic-looking thing where it turned back on, although it was probably still just slow.
Ned: I think it was that there was some issue that, after a certain amount of time, would kind of bog the system down, and we just hadn't had a play-through that was that long. So it's kind of an interesting thing, because in the real world you roll things out really slowly and you test on real traffic. It's not something that I think would have affected a real team; this is a really normal issue to have, and you would diagnose it and correct it, but in a competition setting you don't have that luxury. So that's the trade-off you make. There are a lot of competitive things in the security field that I've participated in, and it can be really exciting but really stressful.
Vamosi: Having invested two years of your life in a project, I would think there must be some takeaways.
Ned: This is actually a great question, because I think there's a huge gap between people who believe in automation and people who believe in the human. I've always had a goal to be doing vulnerability research, real practical research. And what I found was that people on the academic side kind of scoff at things where humans are piecing something together; that's not useful research, and I understand that. And then people on the other side think, oh well, this system, it's not really 100%, it doesn't work on Chrome. But the truth is, I pull on a lot of what I learned in academia and in work on automation. And so I actually live this full time now, using automated tools plus whatever I can do to help the tool succeed, in my day-to-day work. So I think it's 100% a valid area. I think the more we push it, the more we can move things toward automation. And as the industry picks up on it, it becomes more popular, and I think that'll just kind of happen on its own.
Tyler: I think, for all of us that were working on it, it was just a big culmination of what we'd been working on. For CGC specifically, we'd been working on it for two years, and we're a relatively small team. A lot of the other teams had a couple of people work on it on and off. We had other stuff we were working on too, but CGC was probably 80% of our time for two years. So much of our lives had been put into the event that having it work out, and having us prove, both to ourselves and to everyone else, that we know what we're doing, we know how to do these things, I think that meant a lot to us.
Vamosi: Oh, and I should probably mention that Tyler is a team captain for the Plaid Parliament of Pwning, or PPP, the all-time champion DEF CON CTF team with no less than five wins to date.
Tyler: Our CTF team, PPP, is a separate entity, but there's a lot of overlap between the two groups. We have kind of a reputation, a record, of winning lots of CTFs. So to win what is basically the largest CTF, whether it's a true CTF is a little bit hard to say, but, you know, it's kind of the biggest CTF, with a $2 million prize pool, a two-year event, all this stuff, that was kind of a big recognition thing. And when you're competing against other groups that include defense contractors and these big companies who are also trying to do the same thing, you can say, yes, we can compete and hold our own against the best of them.
Vamosi: The Cyber Grand Challenge story actually continues when Mayhem came to play the real capture the flag at DEF CON the next day, against the very human Tyler and Ned and the rest of the PPP team ... but that's for another episode. If you haven’t already, be sure to subscribe to this podcast today so you'll never miss an episode.
Until next time, I'm the very human Robert Vamosi, for The Hacker Mind.