Software Design - An infosec angle

Software design is the important stage where the code is put to work to deliver a business function and build the application. It is the stage where the SRS (Software Requirements Specification) is finalized and signed off for design and development.

The major difficulty in software design is to incorporate the business requirements while also doing threat modeling to understand the attack surface of the application. Many applications show no problems in regular use, but respond in odd ways when given an input or action that was never considered as part of the application design.
The Microsoft STRIDE model captures the major areas that the derived threat model needs to address as part of the design:

1. Spoofing
2. Tampering (the integrity of data)
3. Repudiation
4. Information Disclosure
5. Denial of Service
6. Elevation of Privilege

Even though this threat model does not cover every aspect, it covers most of them. The best way to go about it is to break the application down by these threat vectors and address each one of them.
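
As a rough sketch of how these categories could be tracked during design reviews, the snippet below (Python chosen only for illustration; the class and names are hypothetical, not taken from any standard tool) pairs each STRIDE category with the security property it attacks.

    from enum import Enum

    class Stride(Enum):
        """The six STRIDE threat categories and the property each one attacks."""
        SPOOFING = "authentication"                  # pretending to be another user or system
        TAMPERING = "integrity"                      # modifying data or code
        REPUDIATION = "non-repudiation"              # denying that an action was performed
        INFORMATION_DISCLOSURE = "confidentiality"   # exposing data to the wrong party
        DENIAL_OF_SERVICE = "availability"           # degrading or blocking the service
        ELEVATION_OF_PRIVILEGE = "authorization"     # gaining rights that were never granted

    # Example: the property threatened by tampering
    print(Stride.TAMPERING.value)  # -> integrity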

Each process, data store and element that is part of the application design is vulnerable to one or more of these vectors. A matrix is prepared mapping each vector's effect on the application. All elements, including web services, the people who use the system and the end points where data is handed over to another application, are potential points of vulnerability. Once we map all the data flows of the application, in the form of a diagram, we can identify the areas that are possibly vulnerable to, say, Information Disclosure, and attach a threat profile to them.
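
As a rough illustration of such a matrix, the sketch below (Python for illustration only, with hypothetical element names) follows the common STRIDE-per-element approach: each element type in the data-flow map is paired with the vectors that usually apply to it, and the matrix then lists the threats every concrete element needs to be assessed against.

    # Threat vectors that commonly apply to each element type in a data-flow diagram.
    # This follows the usual STRIDE-per-element guidance; adjust it to the actual application.
    THREATS_BY_ELEMENT_TYPE = {
        "external entity": {"Spoofing", "Repudiation"},
        "process":         {"Spoofing", "Tampering", "Repudiation",
                            "Information Disclosure", "Denial of Service",
                            "Elevation of Privilege"},
        "data store":      {"Tampering", "Repudiation",
                            "Information Disclosure", "Denial of Service"},
        "data flow":       {"Tampering", "Information Disclosure", "Denial of Service"},
    }

    # Hypothetical elements drawn from an application's data-flow map.
    application_elements = {
        "end user":        "external entity",
        "web service":     "process",
        "orders database": "data store",
        "payment handoff": "data flow",
    }

    # Build the matrix: for every element, the vectors it needs to be assessed against.
    threat_matrix = {
        element: THREATS_BY_ELEMENT_TYPE[element_type]
        for element, element_type in application_elements.items()
    }

    for element, threats in threat_matrix.items():
        print(f"{element}: {', '.join(sorted(threats))}")

The per-type defaults are only a starting point; each cell of the matrix still needs to be reviewed against the actual design.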
This mapping is crucial to the way the application is designed. It always helps to think the way a would-be attacker does; as the model evolves, it will then replicate a realistic threat scenario and the way an attack might actually be carried out. This is also one of the main reasons to keep the design team separate from the team that does the threat modeling: it removes the bias a developer has towards his own system and ensures that every aspect that can be exploited is tested thoroughly.

Threat modeling is easier to write about than to practise. It is an evolving discipline: as new attack parameters and newer vectors are discovered, the threat model used for the design of the software needs to keep pace with them.
