Secure SDLC - Security Verification - A needed process

Security verification is the process through which code is analyzed for weaknesses. As a prerequisite, it must be approached with due consideration of what the application is and the business operation it supports; without that context it is very difficult to prioritize and address the weaknesses that are found. Threat modeling is an important tool here, and along with the threat model a security review is indispensable in identifying the root cause of most vulnerabilities: the code itself.

The review starts from the prioritized functions and possible attack vectors; for example, protocol handling may be a likely area for input validation problems. Once a preliminary build is available, a preliminary scan of the code base can be run. This provides a baseline that can then be cleaned up to remove irrelevant findings, so the effort concentrates on the areas of potential weakness. The easiest way to achieve this is with a static code analyzer; many tools are available, both open source and proprietary, that look for common coding errors, design flaws, or other patterns the tool can be configured to detect, as sketched below.
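To make this concrete, here is a minimal sketch assuming a Python code base and an open-source analyzer such as Bandit. The function names are hypothetical; the point is the kind of pattern a scanner flags (an OS command built from unvalidated input) next to a cleaned-up alternative.

```python
import subprocess

def ping_host_unsafe(host: str) -> str:
    # Pattern a static analyzer typically flags: a command string built
    # from unvalidated input and executed through a shell.
    return subprocess.check_output("ping -c 1 " + host, shell=True, text=True)

def ping_host_safer(host: str) -> str:
    # Validate the input first, then pass an argument list so no shell
    # is involved at all.
    if not all(c.isalnum() or c in ".-" for c in host):
        raise ValueError("invalid host name")
    return subprocess.check_output(["ping", "-c", "1", host], text=True)
```

Running a scanner such as `bandit -r .` over the code base would flag the first function as a shell injection risk while the second passes, which is exactly the kind of baseline finding the preliminary scan is meant to produce.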

One important use of the tool is to find issues with the development process itself. If we find areas of concern specific to a vulnerability, we can check them against the process that was followed. The language used is largely immaterial, but the findings provide a major input as to the nature of the code base.

The static analysis tool's other major input, in addition to the coding weaknesses themselves, is that it helps drill down to where a weakness originates: whether the problem was introduced at the design stage or even earlier. If a vulnerability is found, for example an update that is automatically pushed and applied without verifying its source, this can be considered a major weakness that lies not in the code but in the specification, which did not require that updates come from a trusted source or from a data source that is signed.
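As a rough illustration of what the missing specification might have required, here is a minimal sketch of verifying that an update package is signed by a trusted key before it is applied. It assumes an Ed25519 signature and the `cryptography` package; the file name and key handling are hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Public key of the trusted update source, shipped with the application
# (hypothetical: a pinned 32-byte raw key read from a bundled file).
TRUSTED_KEY_BYTES = open("trusted_update_key.pub", "rb").read()

def verify_update(package: bytes, signature: bytes) -> bool:
    """Return True only if the update package was signed by the trusted key."""
    public_key = Ed25519PublicKey.from_public_bytes(TRUSTED_KEY_BYTES)
    try:
        public_key.verify(signature, package)
        return True
    except InvalidSignature:
        return False

def apply_update(package: bytes, signature: bytes) -> None:
    if not verify_update(package, signature):
        raise RuntimeError("update rejected: not from a trusted, signed source")
    # ... only now hand the package to the installer ...
```

A static analyzer cannot invent this requirement on its own; it has to come from the specification, which is why such a finding points back to the design stage rather than to the code.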

The metrics generated by such a tool are also a valuable input, as they are useful for tracking performance over time. Time series analysis shows how the number of vulnerabilities relates to the lines of code scanned, and this ratio (defect density) is important for understanding the maturity of the coding process.
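A minimal sketch of that metric follows, assuming each scan report contributes a finding count and the size of the code base at scan time; the release names and numbers below are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    release: str
    findings: int        # vulnerabilities reported by the analyzer
    lines_of_code: int   # size of the code base at scan time

def density_per_kloc(scan: ScanResult) -> float:
    """Vulnerabilities per thousand lines of code for one scan."""
    return scan.findings / (scan.lines_of_code / 1000)

# Illustrative time series; real values come from successive scan reports.
history = [
    ScanResult("1.0", findings=42, lines_of_code=80_000),
    ScanResult("1.1", findings=35, lines_of_code=95_000),
    ScanResult("1.2", findings=21, lines_of_code=110_000),
]

for scan in history:
    print(f"release {scan.release}: {density_per_kloc(scan):.2f} findings/KLOC")
```

A falling ratio across releases, even as the code base grows, is the kind of signal that suggests the coding process is maturing.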
