There is a pressing need for organizations to protect their information systems and networks due to the increasing volume of cybersecurity incidents caused by bad actors and organizations. Ransomware attacks, in which organizational information and data assets are encrypted and held hostage by an attacker in exchange for a ransom, are one type of cybersecurity attack that has wreaked havoc on organizational information systems and networks. To combat this type of attack on critical infrastructure, a plan for dealing with ransomware must be developed, including measures for preventing it, mitigation strategies, and procedures for restoring system services in the event of a successful attack.
A business continuity plan (BCP) must be detailed: it should include a network diagram of the critical systems, incident response planning (including incident reporting procedures), encryption policies, and disaster recovery procedures, among other things. A robust BCP should be able to restore network and information services in a reasonable amount of time after a successful ransomware attack has taken systems offline.
By any measure, Microsoft Corporation ranks among the world’s five largest generators of computer code. While desktop computing is declining and LAMP (Linux, Apache, MySQL, Perl/PHP/Python)-based cloud computing is rising, the company maintains a desktop market share of approximately 90 percent. It also runs nearly a third of the servers on the Internet, with a significant number of additional servers on privately owned or managed networks.
Since the late 1990s, this has provided hackers with an enormous attack surface to exploit by concentrating their efforts on Microsoft-related software. As a result, the company had to pay a significant price in the early 2000s.
Worms such as Code Red, Nimda, and MyDoom, all of which targeted flaws in Microsoft software, wreaked havoc on the Internet and cost individuals and businesses millions of dollars in recovery costs.
The Stages of the Software Development Life Cycle (SDLC)
The Software Development Life Cycle (SDLC) is a systematic, linear process used by the software development industry to design, develop, test, deploy, and retire software; it is one of many models used for this purpose. The goal of the SDLC is to create high-quality software that meets the end user’s needs. The SDLC can be used for operating system development, application system development, and hardware and software configuration projects (Crnkovic & Larsson, 2006).
A project’s SDLC is initiated at the start of the project to document the initial design concept. The project then proceeds through a linear process: it is refined, developed, tested, and deployed for users before eventually being retired when it reaches a reasonable end-of-life condition.
The SDLC is sometimes referred to as the Waterfall Model because it emphasizes a logical progression of steps, like water flowing from one phase to the next, implying that a phase cannot be revisited once the previous phase is complete (Sarycheva, 2019). The SDLC and other design models have as their primary goal the production of results that meet the needs of the users (Cohen & Haan, 2010). The SDLC has several variants, discussed in greater detail below. While the SDLC supports a viable software development process, it is rather rigid and not adaptable to a variety of software development situations (Ghahrai, 2018). Although there are many variations of this model, the most commonly used one has seven discrete phases: (1) planning; (2) system analysis; (3) system design; (4) development; (5) testing; (6) implementation; and (7) maintenance. Other researchers have described the SDLC as a five-phase model by grouping some of these steps. Each of the seven phases is described in detail below:
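The waterfall model’s strictly linear progression can be sketched in a few lines of code; the phase list follows the seven phases named above, while the helper function is purely illustrative:

```python
# A minimal sketch of the waterfall model's linear phase progression:
# each phase follows the previous one, and completed phases are not revisited.
SDLC_PHASES = [
    "planning",
    "system analysis",
    "system design",
    "development",
    "testing",
    "implementation",
    "maintenance",
]

def next_phase(current):
    """Return the phase that follows `current`, or None after maintenance."""
    index = SDLC_PHASES.index(current)
    return SDLC_PHASES[index + 1] if index + 1 < len(SDLC_PHASES) else None

print(next_phase("system design"))  # development
print(next_phase("maintenance"))    # None
```

The one-way lookup mirrors the model’s rigidity: there is no `previous_phase`, just as the waterfall offers no path back upstream.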
The planning phase is the most important phase of the SDLC because it is during this phase that the project’s breadth and scope are defined. During phase one, the scope of the project and the amount of funding available are determined; securing the support of the executive sponsor is a critical component. A needs analysis is conducted, requirements specifications are developed, and a project plan is drafted. Costs, time, tasks, and available resources are all specified in detail. Because phase one defines the scope, duration, and funding of the development project, careful planning and preparation are required during this phase.
In phase two, the requirements are analyzed and fleshed out to provide more specificity. The user’s requirements are analyzed and used to develop Functional Requirement Documents (FRDs). The functional requirements drive the specific design of the programs and modules used in the design and development phases, as well as the initial creation of the test cases used in the testing phase. In addition to screen prototypes, preliminary data and process flow documents, and other supporting diagrams, the systems analysis phase produces system design specifications.
The third phase, system design, produces high-level design specifications and a document containing (a) context diagrams, (b) data flow diagrams, (c) flow charts, and other diagrammatic tools that support the overall design of the system. Furthermore, use cases are developed to aid the development and testing phases that take place later on, and prototypes based on those use cases are built and approved by users.
The development phase is where functional requirements are translated into a programming language. A systems analyst divides the functional specifications into modules and distributes them to the development team for coding. The development phase also includes creating test cases, which are exercised during the testing phase to ensure that the system performs as expected. Development is customarily among the most time-consuming phases of the SDLC model, owing to the nature of coding, module testing, the construction of test cases, and compliance with development standards.
While unit testing is commonly performed during the development phase, the testing phase of the SDLC includes several different types of testing: string testing, in which a series of programs/modules is tested for interaction; system testing, in which the entire system is tested for functionality; and user testing, in which users confirm that the project meets their expectations. In each type of testing, the test cases developed during the development phase are used to validate and verify that the code behaves as expected. Every defect is meticulously documented and is either (a) corrected immediately or (b) noted as a subsequent change to be completed after the system is put into operation.
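The relationship between a functional requirement and its test cases can be sketched as follows; the `apply_discount` function and its $100/10% rule are hypothetical examples, not from the text:

```python
# Hypothetical module under test: a functional requirement states that
# orders of $100 or more receive a 10% discount.
def apply_discount(total):
    return round(total * 0.9, 2) if total >= 100 else total

# Test cases written during the development phase and re-run during the
# testing phase (pytest style; run with `pytest this_file.py`).
def test_discount_applied_at_threshold():
    assert apply_discount(100.0) == 90.0

def test_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99
```

Each test names the requirement it verifies, so a failure during the testing phase points straight back to the functional specification it violates.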
During the implementation phase, modules and programs are transferred from the testing environment to the production environment. During implementation, final verification and validation of the new code and system are carried out to ensure that the system continues to function as expected. A roll-back strategy is developed before deploying from test to production: if the new system fails to perform as expected after being implemented, the environment is rolled back to its previous state to keep the overall information and networking system consistent.
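The roll-back strategy described above can be sketched as a simple deploy-verify-restore loop; the release names and health check are illustrative assumptions:

```python
# A minimal sketch of a roll-back strategy: activate the new release,
# verify it with a health check, and restore the previous release if
# verification fails.
def deploy_with_rollback(current_release, new_release, health_check):
    active = new_release
    if not health_check(new_release):
        active = current_release  # roll back to the last known-good state
    return active

# Usage: a health check that rejects the (hypothetical) broken release v1.5.
chosen = deploy_with_rollback("v1.4", "v1.5", lambda release: release != "v1.5")
print(chosen)  # v1.4 — the failed deployment was rolled back
```

The key design point is that the previous release is kept available until the new one has been verified in production, so the restore path never depends on the failing system.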
The maintenance phase is the final phase of the SDLC model. It is typically the longest, exceeding even the lengthy development cycle. Once a new system is running in a production environment, maintenance handles user-requested changes that keep the system aligned with new business requirements. In addition to maintaining code to support new or updated business requirements, the maintenance phase includes a retirement sub-phase: a system in production that is being replaced by a newer system is gradually phased out of production (retired).
Taking Advantage of the SDL for Cybersecurity
Even though this pattern is prevalent, it is not unavoidable. There are numerous strong counter-examples, many of which take advantage of the Software Development Lifecycle to their security advantage.
When the Space Shuttle was in development in the 1970s, NASA programmers were confronted with a sobering reality: they were tasked with writing the control code for a flying bomb that would transport seven souls into the harsh realm of outer space, where a single bug could be lethal.
The team responded by developing what is arguably the strictest and most secure software development scheme ever devised. As of 2011, the most recent three deployed versions of the software, each with nearly half a million lines of code, contained no more than a single error apiece, and the software had never crashed or miscalculated.
NASA’s team had a $35 million budget to achieve their goal, but ensuring security does not require large sums of money. OpenBSD, a free variant of the Unix operating system that serves as the foundation for much of today’s Internet, is developed entirely by volunteers with a strict emphasis on security as its guiding principle. Only two remotely exploitable security holes have ever been discovered in the nearly two decades that the operating system has been available.
The team achieves this by using a completely open code base and regular cycles of auditing and code review—a critical phase of most SDL spiral models that other software developers often overlook.
The Role of Cybersecurity Professionals in the SDL
Although most cybersecurity professionals are not expert programmers, certain niche information security careers require extensive coding knowledge and experience. White hat hackers sift through code looking for flaws and exploitable weaknesses. Application security specialists collaborate with software development engineers to produce more secure code. Even ordinary security engineers and analysts frequently employ basic programming knowledge and skills to test the software they are tasked with analyzing or deploying. The following are taken into account:
- The security requirements outlined in the application’s use case
- The number of users expected to use the application
- The technologies and programming languages used for the underlying development
- Access to the data that will be passed through the application
- The platform on which the application will be installed
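The considerations listed above can be captured as a structured record that a security engineer fills in per application; the field names and sample values below are illustrative assumptions, not from the text:

```python
from dataclasses import dataclass, field

# A sketch of the per-application review inputs listed above,
# recorded as a structured checklist rather than free-form notes.
@dataclass
class SecurityReviewInputs:
    use_case_security_requirements: str
    expected_user_count: int
    languages_and_technologies: list = field(default_factory=list)
    data_accessed: list = field(default_factory=list)
    deployment_platform: str = ""

# Hypothetical example for a single application under review.
review = SecurityReviewInputs(
    use_case_security_requirements="authenticated access only",
    expected_user_count=5000,
    languages_and_technologies=["Python", "PostgreSQL"],
    data_accessed=["customer PII"],
    deployment_platform="Linux container",
)
print(review.expected_user_count)  # 5000
```

Keeping the inputs in one typed record makes it easy to compare reviews across applications or feed them into later risk scoring.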
Different programmers take a variety of approaches to the SDL. In organizations with a waterfall development model, cybersecurity considerations come into play during the design and testing phases, with little opportunity to influence the software after it has been deployed. In environments that adopt agile development methodologies, security considerations enter almost every step of the rapidly iterative coding cycle, with a strong emphasis on early identification and correction of vulnerabilities.
Software Development Security Testing
When developing an application, system component, or system service, the software developer must perform numerous attack surface reviews of the code base. Because malicious actors are constantly developing new threats to software, these reviews should be periodic and intentional (Force & Initiative, 2013). The purpose of security testing is to help the developer (or the development team) identify and understand the weak and evolving points in application software or operating system software. The attack surface for software development includes the following elements:
- An appreciation by the software developer of the importance of the data used in the software and of how the code is developed to protect that data
- The sum of all possible paths through which data and commands can enter and exit the software
- An examination to ensure that the code protecting the previously mentioned data and command paths, and the resources the program connects to, is in place
- An investigation of the facilities that allow for authentication and authorization of the code’s execution on the computer system
- Validation and verification of the data stream to determine whether the data can be suitably encoded, decoded, or encrypted/decrypted
Access auditing, encryption, checksums, and data integrity checking are all employed (Force & Initiative, 2013).
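The checksum-based integrity checking mentioned above can be sketched in a few lines using Python’s standard `hashlib`; the message contents are illustrative:

```python
import hashlib

# Integrity checking via checksum: compute a SHA-256 digest when data is
# stored, then re-verify it on retrieval to detect tampering.
def checksum(data):
    return hashlib.sha256(data).hexdigest()

original = b"transfer $100 to account 42"
stored_digest = checksum(original)

tampered = b"transfer $900 to account 42"
print(checksum(original) == stored_digest)   # True  - data intact
print(checksum(tampered) == stored_digest)   # False - integrity violated
```

A checksum detects modification but does not by itself prove who produced the data; for that, the encryption and authentication facilities listed above (e.g., keyed MACs or signatures) are layered on top.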
To conduct software security testing, a baseline must be established so that future changes can be measured and compared against it. This requires developing an attack surface map. The points of entry into the software, such as HTTP headers, user display forms, and run-time arguments, must be defined, because these are attack vectors that an attacker can leverage. Beyond the ingress and egress of data and the numerous methods a program can employ for input, processing, and output, it is beneficial for the developer to ask, “How will the attacker view the code?”, which may provide additional insight during security-focused code review. Vulnerability scanning tools such as Abby Scan, AppScan, and many others listed on the OWASP website (OWASP, n.d.) can also help developers mitigate security issues in developed code. Once an attack surface map has been created, a priority ranking is assigned so that high-risk areas can be identified and addressed first. Security-conscious programming requires a configuration management process, an established baseline for program security, and an attack surface map to prioritize the risks that must be addressed.
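The mapping-and-ranking step above can be sketched as a simple data structure; the entry points echo the examples in the text, while the risk scores are illustrative assumptions:

```python
# A minimal sketch of an attack surface map: enumerate entry points,
# score each by risk, and rank them so high-risk areas are reviewed first.
attack_surface = [
    {"entry_point": "HTTP headers",       "risk": 8},
    {"entry_point": "user display forms", "risk": 9},
    {"entry_point": "run-time arguments", "risk": 5},
    {"entry_point": "admin API",          "risk": 7},
]

# Priority ranking: highest risk first.
by_priority = sorted(attack_surface, key=lambda e: e["risk"], reverse=True)
for entry in by_priority:
    print(f'{entry["risk"]:>2}  {entry["entry_point"]}')
```

Re-running the same scoring after each release, and diffing it against the baseline map, makes growth of the attack surface visible before an attacker finds it.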
The Capability Maturity Model (CMM) is a framework for assessing the maturity of an organization’s software development processes.
The Capability Maturity Model (CMM), developed by the Software Engineering Institute (SEI) at Carnegie Mellon University, serves as a framework for describing and supporting future refinement of an organization’s software development process (Rouse, 2007). The model has five levels, each identifying processes that are increasingly mature. The CMM thereby establishes a framework for continuous improvement (CI) by defining five levels of software maturity, as follows:
- Level 1 – Initial: Processes used in development are disorganized and, in some cases, completely chaotic. Project success depends on the efforts of individuals and, because processes are not sufficiently defined and documented, is not repeatable.
- Level 2 – Repeatable: The fundamentals of project management have been established, and successes can be repeated because the necessary processes have been defined and well documented.
- Level 3 – Defined: The organization has developed a software process with more thorough documentation, standardization, and integration that can be repeated in the future.
- Level 4 – Managed: The organization monitors and controls its processes by collecting and analyzing data. Metrics help ensure repeatability.
- Level 5 – Optimizing: Processes are continuously improved by monitoring feedback from current processes, and additional innovative processes are introduced to further improve the organization (Rouse, 2007).
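The five levels above can be kept as a small lookup table for use during a maturity assessment; this sketch simply restates the model, with the one-line summaries paraphrased from the descriptions given:

```python
# The five CMM maturity levels, condensed into a lookup table.
CMM_LEVELS = {
    1: "Initial - ad hoc, chaotic processes; success depends on individuals",
    2: "Repeatable - basic project management; successes can be repeated",
    3: "Defined - documented, standardized, integrated software process",
    4: "Managed - processes measured and controlled via collected metrics",
    5: "Optimizing - continuous improvement driven by process feedback",
}

def maturity(level):
    """Return the one-line description for a CMM level (1-5)."""
    return CMM_LEVELS[level]

print(maturity(3))
```

Tagging each development process in an inventory with its assessed level makes it straightforward to report where the organization sits on the model and which processes to improve next.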
The Software Development Life Cycle (SDLC) has several variations that can be used to develop software. These variations arose because the fluid nature of organizations required approaches better suited to their environment and culture than the rigid approach of the traditional SDLC. When developing software, the security of the software under development should be a primary concern in any development process. Finally, understanding the organization’s maturity level, as determined by the SEI CMM, allows the organization to improve its documentation and processes, which in turn strengthens its cybersecurity posture even further.