Security topics we don't talk about enough, namely the human element

Today's podcast guest is Andrzej Dyjak, a cybersecurity specialist. We talk mainly about the latest security bugs, but we also cover a holistic approach to security and the psychological and sociological aspects we should take into account as the security community.

You can find Andrzej at https://dyjak.me | @andrzejdyjak | https://www.linkedin.com/in/andrzejdyjak/

and the recording at:


What a practical DevSecOps implementation looks like

An agile approach to security sounds great, and yet I have repeatedly met people who had no idea how to get started with it. That's why, some time ago, I created a podcast introducing the world of DevOps and DevSecOps; those were fairly light episodes that revealed no secrets and served more as a history of both movements.

Yesterday I published a half-hour recording in which I go into a sensible level of detail about how this works in a real business: how to approach it, what you can achieve with DevSecOps, which tools are used, how to automate security assurance processes, and a few other interesting things you won't hear anywhere else on the Polish internet.

So give it a listen, because it turned out really well!


And the first, introductory podcast:

Which skills are essential to find a job in security and how to build an initial portfolio

Question: “What should I do to earn more credibility and which skills specifically should I learn to put myself on a track of becoming a security specialist”

I recommend you do pretty much anything you can, because 1% of exposure is still better than 0. If you’re into webappsec, then go for it, and definitely play with bug bounty programs and CTFs. You can also gain good credibility by writing a blog where you document your journey, write down the most important takeaways, or simply share the learning curve with others. Wherever you are right now, there is always someone who is behind you, and even if you have one week of experience, there are people with zero experience who would benefit from your advice and blog posts. This is hugely underestimated, my friend, so don’t shy away from exposing yourself to the world.

That’s what I wrote in my book as well: we should all dissect goals into smaller tasks rather than big projects (and use projects just to stay in sync with reality), because when we see progress we’re more eager to keep pushing and make more of these incremental improvements. As creative human beings we tend to focus on the ‘next big thing to disrupt something’, which often leaves us in stagnation, because we’re overthinking it.
The best advice I can give anyone who wants to achieve something big is always the same: just start with the smallest thing possible, and see what happens next.

When it comes to learning what’s required to be a consultant, I recommend you simply check job postings and role descriptions in your area and create a list of the most common requirements. This research will show you which skills are important in your area and lets you optimize your learning roadmap, because you’ll know what’s truly necessary. And once you’ve got a foot in the door, you can go from there and fill the knowledge gaps.
I hope this comment helps, and in case you’re hungry for more knowledge, some time ago I created a podcast where I try to outline how to become a security engineer:

and another piece that could be summarized as “do whatever makes you happy, because there is enough work for anyone”:

TOP 9 Rules To Maximize ROI Of Bug Bounties And Penetration Tests

Originally posted at testarmy.com

Having worked on both sides of the fence, I want to share the biggest lessons learned during a career that has included:

  • being a penetration tester and red teamer
  • being an accomplished bug bounty hunter
  • working as an internal QA engineer, Security Engineer and Security Architect, a.k.a. blue teamer
  • running and maintaining bug bounty programs for a handful of companies
  • working as a head of security, reporting to the board of directors and responsible for maximizing the ROI of security initiatives, including penetration tests and bug bounties

Here is a list of action items I recommend you take during and after a penetration test or bug bounty cycle. This list is based on the most common gaps I’ve noticed at the companies I’ve worked with and on the things that, in my experience, made a huge impact ROI-wise.

By implementing these steps, you’ll get a much higher return on investment from penetration tests. If you’ve already spent the money, let’s make sure you’ve spent it well and squeezed the most out of it.

1. Provide all information to pentesters beforehand.

You want to make sure that pentesters don’t waste too much of their time on reconnaissance. Provide them with documentation on the product, a video recording of how your product works, and a list of APIs and endpoints you want them to test. In general, penetration testers will know what to go after; however, if you have hidden APIs or want to speed up the process, it’s better to provide that information up front.

2. Don’t play games and don’t block pentesters as they do their job. You’re all on the same team and should share the common goal of making your organization safer

The mindset shift is a huge thing. I’ve met a number of security teams that were literally fighting external pentesters out of fear for their position and out of ego. You hire pentesters to do the work for you and to provide maximum value to your business, so it’s not smart to waste your money arguing with pentesters and blocking their testing environments.

Depending on the corporate culture, it may be a good idea to have a separate team – such as compliance/audit – coordinate external testing to avoid conflicts of interest.

3. Be humble and thoroughly follow pentesters’ recommendations for remediating issues

Don’t fight it when pentesters give you information about an issue’s severity along with remediation guidance. It may sometimes happen that, with internal business knowledge, you can perform a better risk analysis and conclude that the issue has a different severity for you. However, keep in mind that more often than not, pentesters know what they’re doing and it’s not their first assessment (I hope!), so when they report an issue, it’s likely to be legitimate. It’s generally a good idea to follow their remediation guidance to make sure you’ve addressed the issue in depth, and then request a re-test.

4. Find the root cause of every single issue and learn what you can do better next time.

Stay focused on the big picture and think about other places where the same issue may exist but wasn’t found by the pentesters. Instead of simply fixing individual bugs, dig deeper into your codebase to find out whether the same issue exists in other applications. Then review your software engineering practices to see what could be done better to stop those bugs from appearing in the first place.
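
As a very rough illustration, a follow-up sweep after a SQL injection finding might look like the sketch below: a small Python script that greps the rest of the codebase for similar dynamic query construction. The regular expression and file filter are assumptions made for the example; a real sweep should lean on a proper static analysis tool rather than this ad hoc pattern.

```python
# Illustrative sketch only: after a SQL injection report, sweep the codebase for
# similar dynamic query construction. The regex is a crude assumption and will
# produce false positives/negatives; a real sweep should use a static analysis tool.
import os
import re
import sys

SUSPECT = re.compile(r'execute\s*\(\s*f?["\'].*(%s|\{|\+)', re.IGNORECASE)

def scan(root: str) -> None:
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as handle:
                for lineno, line in enumerate(handle, start=1):
                    if SUSPECT.search(line):
                        print(f"{path}:{lineno}: possible dynamic SQL: {line.strip()}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```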

5. Have your engineering teams learn from pentesters and study the pentest report

Don’t keep the report to yourself; let everyone learn from it. Use it as an interesting exercise and learning experience. Pentests at most companies happen once or twice per year, so why not maximize the outcome? That way, software and QA engineers can learn how to apply that knowledge in their day-to-day work, for example by adding the findings to existing test cases.

6. Ensure developers have a complete understanding of all issues

Take ownership of the report instead of just throwing it at employees. You want to make sure everyone has a good grasp of the issues, so they can address them properly and use that knowledge in the future to build more robust apps. If something is complicated, it won’t be easily remembered or put into action.

7. Cover identified bugs with solid regression test cases

You don’t want pentesters to find the same bugs over and over again. Not only is it a waste of money, it’s also additional exposure: if a regression pops up, attackers may abuse it before your next pentesting engagement.

We’re performing penetration tests to improve our products and company, so if you’re not going that extra mile, you’re leaving a lot on the table.
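
For illustration only, a regression test for a previously reported reflected XSS could look roughly like the sketch below (pytest-style, using the `requests` package, with a hypothetical staging URL, endpoint and payload). The point is simply that every finding from the report becomes a permanent, automated check.

```python
# A minimal, hedged sketch of a regression test for a previously reported reflected
# XSS. The staging URL, the /search endpoint and the payload are hypothetical; adapt
# them to the actual finding. Assumes the `requests` and `pytest` packages.
import requests

BASE_URL = "https://staging.example.com"          # hypothetical test environment
XSS_PAYLOAD = '"><script>alert(1)</script>'       # payload taken from the pentest report

def test_search_does_not_reflect_unescaped_payload():
    response = requests.get(
        f"{BASE_URL}/search",
        params={"q": XSS_PAYLOAD},
        timeout=10,
    )
    # If the exact payload ever comes back unencoded, the old bug has regressed.
    assert XSS_PAYLOAD not in response.text
```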

8. Once in a while, review the changes made by pentesters and visit the profiles they used for testing to catch unexpected regressions.

It makes a lot of sense to clone the activities performed by pentesters and build standalone testing scenarios out of them. When you log in to the pentesters’ accounts in the testing environment, you may notice many more bugs than they reported. Security testers focus on reporting security issues and most often don’t spend time reporting UI/UX problems they may have spotted during security tests. So simply checking their environment to see whether, for example, unexpected encoding broke the UI will be beneficial, and again it will be something to share with internal QAs so they can improve their test cases.

9. Review logs to catch unidentified bugs

Some things aren’t exposed to the user, which means that security testers may have touched a fragile system, and maybe even broken it, without being aware of it. If you review the logs, you may find unhandled exceptions, Internal Server Errors, and other indicators that can help you improve the robustness of your code and systems.

By getting comfortable with the logs generated by penetration testers, you’ll also be able to build security monitoring around your access and application logs to detect potentially malicious hacking attempts. If certain keywords appeared in the logs only during the pentesting engagement and never before, real attackers will likely probe in the same fashion. If you create alerting around that, you may be able to spot them and respond to the threat.
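
To make this concrete, here is a minimal sketch of what such keyword-based log monitoring could look like. The indicator list and the log path are illustrative assumptions based on common pentest payloads; in practice this logic would live as an alerting rule in your SIEM or log pipeline rather than a standalone script.

```python
# Minimal sketch: flag log lines containing indicators that appeared only during the
# pentest (scanner user agents, injection payloads). The indicator list and log path
# are illustrative; in production this would be an alerting rule in your log pipeline.
import sys

INDICATORS = [
    "sqlmap",             # scanner user agent seen during the engagement
    "../../etc/passwd",   # path traversal probe
    "<script>",           # reflected XSS probe
    "' or '1'='1",        # classic SQL injection probe
]

def scan_log(path: str) -> int:
    hits = 0
    with open(path, encoding="utf-8", errors="ignore") as handle:
        for lineno, line in enumerate(handle, start=1):
            lowered = line.lower()
            if any(indicator in lowered for indicator in INDICATORS):
                hits += 1
                print(f"{path}:{lineno}: {line.rstrip()}")
    return hits

if __name__ == "__main__":
    found = scan_log(sys.argv[1] if len(sys.argv) > 1 else "access.log")
    print(f"{found} suspicious line(s) found")
```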

In my career I’ve met only a handful of companies that took such a wide range of actions around penetration tests. If you follow this guidance, you’ll optimize your spending and should be more satisfied with the overall return on investment in security testing.

Here Is What We Should Teach All Software Developers About Security

I received this question a couple of weeks ago, and I believe it’s valuable enough to share my thoughts on the subject here as well.

Having been a university lecturer myself, I truly believe there is much more we could be doing. That doesn’t mean we need to push a lot of new knowledge on students; it’s enough to share the principles with them.

Educators need to make it their goal to teach software engineers at least something about security, because 1% is better than none. Instructors should strive to make lessons relevant and engaging while keeping them as simple as possible. You want to show software developers that application security isn’t really as hard as it’s portrayed. If we lower the entrance bar and help them understand where to find high-quality knowledge, they’ll be more eager to learn about the subject.

The common problem we see among software engineers coming from a variety of backgrounds isn’t the complexity of security, but that they simply haven’t been made aware of the need for application security.

We certainly don’t want to overwhelm software developers with loads of new knowledge because they already have a lot to learn in their own specialisations. A good start would be to outline the basic security principles. These are fundamental principles that would – hopefully – change their naive mindset and prepare their software for facing the real, dangerous world.

Here are some of those key security principles (paraphrased for brevity):


  • Minimize attack surface area

Every feature that is added to an application adds a certain amount of risk to the overall application. The aim of secure development is to reduce the overall risk by reducing the attack surface area, so think twice before you write that next feature and expand your code. More code means more places where mistakes could have been made.

  • Establish secure defaults

There are many ways to deliver an “out of the box” experience for users. By default, however, the experience should be secure, and it should be up to the user to reduce their security – if they wish to and if the design allows it.
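
As a small, hedged illustration, the hypothetical `SessionConfig` below ships with its strictest settings enabled, so any relaxation has to be an explicit choice in code rather than something that happens by omission.

```python
# Minimal sketch of "secure by default": this hypothetical SessionConfig ships with
# the strictest settings, so weakening any of them requires an explicit, visible choice.
from dataclasses import dataclass

@dataclass(frozen=True)
class SessionConfig:
    cookie_secure: bool = True        # send the session cookie over HTTPS only
    cookie_httponly: bool = True      # hide the cookie from JavaScript
    cookie_samesite: str = "Strict"   # limit cross-site sending of the cookie
    session_ttl_seconds: int = 900    # short sessions unless deliberately extended

# The out-of-the-box experience is safe...
default_config = SessionConfig()

# ...and any relaxation is a deliberate, reviewable decision rather than an accident.
legacy_config = SessionConfig(cookie_samesite="Lax", session_ttl_seconds=3600)
```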

  • Principle of least privilege

The principle of least privilege recommends that accounts have the least amount of privilege required to perform their business processes. This encompasses user rights as well as resource permissions such as CPU, memory, network, and file system limits.
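
A tiny sketch of the idea, with made-up services and grants: each account is given only the actions its business process needs, and anything not explicitly granted is denied.

```python
# Minimal sketch of least privilege: each service account is granted only the actions
# its business process needs. The services, resources and actions are illustrative.
REPORTING_SERVICE = {"orders": {"read"}}                 # read-only consumer
CHECKOUT_SERVICE = {"orders": {"read", "write"}}         # creates orders, nothing more
ADMIN_CLI = {"orders": {"read", "write", "delete"}}      # human-operated and audited

def is_allowed(grants: dict, resource: str, action: str) -> bool:
    # Anything not explicitly granted is denied by default.
    return action in grants.get(resource, set())

assert is_allowed(REPORTING_SERVICE, "orders", "read")
assert not is_allowed(REPORTING_SERVICE, "orders", "delete")
```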

  • Principle of Defense in-depth

The principle of defense in-depth suggests that where one control would be reasonable, more controls that approach risks in different fashions are better. Controls, when used in-depth, can make severe vulnerabilities extraordinarily difficult to exploit and thus unlikely to occur.

With secure coding, this may take the form of tier-based validation, centralized auditing controls, and requiring users to be logged in on all pages.
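
The sketch below illustrates one of those forms, tier-based validation: the same constraint is enforced at the API boundary and again in the storage layer, so bypassing one check is not enough. The function and field names are assumptions made for the example.

```python
# Minimal sketch of tier-based validation as one form of defense in depth: the same
# constraint is enforced at the API boundary and again in the storage layer, so a
# single bypassed check is not enough. All names here are illustrative.
import re

USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")

def save_user(username: str) -> None:
    # Second line of defense: the storage layer re-checks its own invariants even
    # if a caller skipped the boundary validation.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("refusing to persist an invalid username")
    # ...a parameterized INSERT plus centralized audit logging would go here...

def handle_signup(raw_username: str) -> None:
    # First line of defense: reject bad input at the edge.
    if not USERNAME_RE.fullmatch(raw_username):
        raise ValueError("invalid username")
    save_user(raw_username)
```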

  • Fail securely

Applications regularly fail to process transactions for many reasons. How they fail can determine if an application is secure or not. So when an application fails or throws an exception, it should default to the lowest privileges and accesses possible.
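
A minimal sketch of failing closed, with an invented identity-service call: when the authorization check itself errors out, the result is a denial, never an implicit grant.

```python
# Minimal sketch of failing securely: if the authorization check itself errors out,
# the caller is denied rather than allowed. All names here are illustrative.
import logging

logger = logging.getLogger(__name__)

def fetch_roles(user_id: str) -> set:
    # Placeholder for a call to an identity provider; here it simulates an outage.
    raise TimeoutError("identity service unreachable")

def is_authorized(user_id: str, resource: str) -> bool:
    try:
        return "admin" in fetch_roles(user_id)
    except Exception:
        # Fail closed: an error inside the security check must never grant access.
        logger.exception("authorization check failed for %s on %s", user_id, resource)
        return False

assert is_authorized("alice", "reports") is False   # the outage leads to denial, not access
```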

  • Don’t trust services

Many organizations utilize the processing capabilities of third-party partners, who more than likely have security policies and a security posture different from yours. It is unlikely that you can influence or control any external third party, whether they are home users, major suppliers or partners.

Therefore, implicit trust of externally run systems is not warranted. All external systems should be treated in a similar fashion.
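
As one possible illustration, the sketch below calls a hypothetical partner API with a timeout, checks the HTTP status, and validates the response fields before using them. The URL, field names and range check are assumptions for the example; it relies on the `requests` package.

```python
# Minimal sketch of not trusting an external service: the call has a timeout, the
# HTTP status is checked, and the response fields are validated before use. The
# partner URL and field names are assumptions; it relies on the `requests` package.
import requests

def fetch_partner_discount(order_id: str) -> int:
    response = requests.get(
        "https://partner.example.com/api/discounts",   # hypothetical partner API
        params={"order_id": order_id},
        timeout=5,                 # never wait forever on someone else's system
    )
    response.raise_for_status()    # unexpected statuses are failures
    payload = response.json()

    discount = payload.get("discount_percent")
    # Validate type and range instead of trusting the partner's data blindly.
    if not isinstance(discount, int) or not 0 <= discount <= 50:
        raise ValueError(f"suspicious discount value from partner: {discount!r}")
    return discount
```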

  • Separation of duties

The key to fraud control is the separation of duties. For example, someone who requests a computer cannot also sign for it, nor should they directly receive the computer. This prevents the user from requesting many computers and claiming they never arrived.

Certain roles have different levels of trust than normal users. In particular, administrators are different from normal users. In general, administrators should not be users of the application.

  • Avoid relying exclusively on security by obscurity

Security through obscurity is a weak security control and nearly always fails when it is the only control. This is not to say that keeping secrets is a bad idea, it simply means that the security of key systems should not be reliant upon keeping details hidden. The security should rely upon many other factors, including reasonable password policies, defense in depth, business transaction limits, solid network architecture, and fraud and audit controls.

  • Keep security simple

Attack surface area and simplicity go hand in hand. Certain software engineering fads prefer overly complex approaches to what would otherwise be relatively straightforward and simple code.

Developers should avoid double negatives and complex architectures when a simpler approach would be faster and easier to reason about.
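
A tiny, contrived illustration of the double-negative point: both checks below are logically equivalent, but only one of them is easy to review for security mistakes. The `User` and `Report` types are made up for the example.

```python
# Tiny illustration of "keep security simple": both checks are logically equivalent,
# but the second is far easier to review for security mistakes. The User and Report
# types are made up for the example.
from dataclasses import dataclass

@dataclass
class User:
    id: int
    is_active: bool
    is_admin: bool

@dataclass
class Report:
    owner_id: int

def can_view_confusing(user: User, report: Report) -> bool:
    return not ((not user.is_active) or not (user.id == report.owner_id or not (not user.is_admin)))

def can_view_simple(user: User, report: Report) -> bool:
    return user.is_active and (user.id == report.owner_id or user.is_admin)

alice = User(id=1, is_active=True, is_admin=False)
assert can_view_confusing(alice, Report(owner_id=1)) == can_view_simple(alice, Report(owner_id=1))
```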

  • Fix security issues correctly

Once a security issue has been identified, it is important to develop a test for it and to understand the root cause of the issue. When design patterns are used, it is likely that the security issue is widespread amongst all code bases, so developing the right fix without introducing regressions is essential.


It would make a huge difference if we educated software engineers and made them aware of these risks. It’s important because, even without going deep into technical specifics, we can shift their mindset one step at a time.

We teach software engineers – and for a good reason – how to be good citizens and how to build, which after many years compounds and creates a builder mindset. Security assurance, on the other hand, is mostly about breaking and finding holes in a “perfect” creation. Builders generally don’t think about the bad stuff, because they’re always striving for something better, prettier and more functional. Sharing the basics with them and showing what can possibly go wrong doesn’t cost much, yet it’s a great starting point.


And then if we want to go deeper, which we should, we should teach them about the security issues classified in the OWASP TOP 10 and train them to perform basic security testing. This is how we embrace the breaker mindset and widen their skill set and horizons.
Although OWASP produces educational content mostly for web and mobile applications, it can also enrich the mindset of desktop application developers and other people working in software development. However, we can’t just throw those resources at everyone. To be effective, we must provide relevant training, so students don’t feel pushed to learn something they’ll never get to apply in their careers.


Once we’re done with face-to-face training, we should provide them with resources they can use on their own. Pointing them to books, websites and courses they can use to become more security-savvy is really helpful and reduces the discomfort that comes with entering a new field of study.
As for web developers, we should point them to content such as:

  • OWASP TOP 10 – A list of the most critical web application security risks
  • OWASP Application Security Verification Standard – A checklist of security requirements that can be used, even ad hoc, to verify an application against a security standard
  • OWASP Testing Guide – An extensive guide to web application security testing
  • OWASP Code Review Guide – A detailed guide on how to perform whitebox code reviews to identify security bugs


Depending on how much time and resources we can and are allowed to spend on it, we should go beyond that and give students access to a variety of other resources. Think of books explaining the security concepts behind the components they’ll use on a daily basis, whether as users or while integrating them into their software.
The book “The Tangled Web” is an excellent example for gaining a better understanding of web browsers and frontend security.

Whatever you decide to choose, make sure your actions inspire them to continue their education. In my experience, the best way to learn is by applying theory to real-life problems and putting freshly acquired knowledge into practice. Human beings learn by making and breaking, and every single mistake can be converted into a positive learning experience, thus making them better developers.

If we don’t address the issue at its core (formal education), we’ll have a hard time keeping up the current pace of innovation, because we’ll be constantly derailed by problems we try to fix with an ineffective, myopic mindset. So keep exploring, keep testing and keep moving this field forward.

Good luck, ’cause you’ll need it. We all do!