
Human Factor in Protecting Information Systems

We often talk about vulnerabilities, exploits and other specific issues that let a would-be attacker succeed in breaking into systems. But looking across the varieties of attacks and payloads used for break-ins, at the end of the day it is usually a brute force or a social engineering attack that yields the most, at least from the attacker's point of view.
The recent XSS attacks, like the ones against Apache and JIRA, all trace back to the same fundamental weakness - humans. As long as people handle certain activities and set the ways of doing things, these kinds of breaches will continue for a long time. Many say two-factor authentication solves the issue, but in my experience I have found plenty of two-factor fobs with the primary username and password scribbled on or hanging from the fob. So much for its usefulness.

The purpose of two-factor authentication can be easily defeated, and again it is the indefatigable user who defeats it. It is often desirable to automate builds with a set of scripts, so that there is consistency and human errors do not creep in. The other important thing to teach administrators is to use a different set of usernames and passwords for the application (web app or any other app) and for the operating system.
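To make that concrete, here is a minimal sketch of the kind of check an administrator could script. It assumes a hypothetical app_users.txt file listing application account names, and simply flags any of them that also exist as local operating system accounts (using Python's standard pwd module on Unix); a matching name is a hint that credentials may be shared across the two layers.

# Minimal sketch (assumption: application account names are listed, one per
# line, in a hypothetical file app_users.txt on a Unix host).
# Flags any application account that also exists as an operating system
# account, so credentials are not silently shared across the two layers.

import pwd

def os_account_exists(name: str) -> bool:
    """Return True if 'name' is a local operating system account."""
    try:
        pwd.getpwnam(name)
        return True
    except KeyError:
        return False

def main() -> None:
    with open("app_users.txt") as fh:
        app_users = [line.strip() for line in fh if line.strip()]

    overlapping = [u for u in app_users if os_account_exists(u)]
    if overlapping:
        print("Application accounts that also exist at the OS level:")
        for user in overlapping:
            print(f"  {user}")
    else:
        print("No application account shares a name with an OS account.")

if __name__ == "__main__":
    main()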
We have also often seen an easily crackable username and password combination in the web application that has a corresponding account in the operating system, sometimes with sudo permissions. The simple idea is to separate the OS from the overlying application layer. Segregating users may not always be possible, but segregating the different components, each with its own set of passwords, can mitigate the issue to a certain extent.

A simple paradigm is to ensure consistent roll-outs with minimal human intervention, though this may not be feasible in a small set-up where the cost of automation is not warranted. Even in such instances, it is always recommended to go for a standard build. Build in maker-checker functionality: human fallibility when checking your own work is phenomenal, and there will always be gaping holes when you review what you have done yourself. Self-audit is fine, but people tend to skip over things because they are confident about what they did; the fact that they completed the work themselves gives them a false sense of security. Always have another person run through the builds, and use a tool to scan and provide basic inputs. A tool is only as good as how well it is configured, so make sure you configure and use it properly. And presto, many a problem may vanish.
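As an illustration of the maker-checker idea, here is a minimal sketch. It assumes a hypothetical baseline.json, produced from the standard build, that maps configuration file paths to their expected SHA-256 hashes. The person who did not perform the roll-out runs it and reviews the deviations, instead of relying on the builder's self-audit.

# Minimal maker-checker sketch (assumption: a hypothetical baseline.json,
# produced from the standard build, maps file paths to expected SHA-256 hashes).
# The "maker" rolls out the build; the "checker" runs this script and reviews
# any deviations from the standard build.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> None:
    baseline = json.loads(Path("baseline.json").read_text())

    for file_name, expected_hash in baseline.items():
        path = Path(file_name)
        if not path.exists():
            print(f"MISSING   {file_name}")
        elif sha256_of(path) != expected_hash:
            print(f"MODIFIED  {file_name}")
        else:
            print(f"OK        {file_name}")

if __name__ == "__main__":
    main()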
