So it stands to reason that the more complex the system, the more complex the software and, probably, the more often it requires updates. As Professor Williams points out in his paper (Williams 2009), this is especially true for a system that has been operating for some time (late in its life cycle), even though it may still be working quite well: “When an architectural change causes the interactions to become more complex, the architecture is degenerating.” So the developer must make updates carefully, for faults and bugs are far more common in older systems, much as major surgery is riskier in older animals.
Just as it is doubtful that Windows ’95 would work in today’s computing world, especially on the Internet, complex systems from that era have had to change in order to meet the ever-increasing demands placed on them by users. Manny Lehman developed a set of laws for software evolution. His Law II states, in a nutshell, that with each change the system becomes ever more complex and the danger of system instability increases (Ibid). Most software manufacturers consider the maintenance or update process the most expensive part of developing and producing the product.
So it is no wonder that there is so much emphasis today on what is known as the software process. Acuna (2000) describes four basic parts that every process has, from which a model can be built to predict future changes. They are the agent (not necessarily a human; it may be a sub-process inside the software), the role (the rights and responsibilities assigned to the agent), and the activity (the part of the process where changes are made, normally by a human). Finally there is the artifact (the raw product of the software from which future changes are developed).
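To make those four elements concrete, here is a minimal sketch of how they might be modelled in code. The class and field names are illustrative assumptions of mine, not definitions taken from Acuna’s paper.

```python
# A rough sketch (assumed names, not Acuna's) of the four process elements.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Agent:
    """Performer of an activity; may be a person or an automated sub-process."""
    name: str
    is_human: bool = True


@dataclass
class Role:
    """Rights and responsibilities assigned to an agent."""
    title: str
    responsibilities: List[str] = field(default_factory=list)


@dataclass
class Artifact:
    """Raw product of the software from which future changes are developed."""
    name: str
    version: int = 1


@dataclass
class Activity:
    """The step of the process where changes are made, producing an artifact."""
    description: str
    agent: Agent
    role: Role
    produces: Artifact


# Example: a maintenance activity carried out by a human developer.
dev = Agent(name="maintainer", is_human=True)
maintainer_role = Role(title="developer", responsibilities=["apply patch", "run tests"])
patch = Activity(
    description="apply bug fix",
    agent=dev,
    role=maintainer_role,
    produces=Artifact(name="release build", version=2),
)
print(patch.produces)
```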
Since the artifact phase is where most changes in the process take place, the developer can accurately predict future changes from similar software processes. When this model is used, resilience to change is greater and the likelihood of bugs is decreased immensely.

Emerging Trends in Software Development: Cloud Computing

In 1995 technology was great. You bought a computer with a huge, bulky monitor and an early Intel processor. But, as usual with new technology, things could only be improved. The processor got faster and the software expanded.
But the best thing was that the cable slowly went away. First there were DSL and cable, and finally came wireless: Wi-Fi and Bluetooth. Your laptop can go anywhere with an air card or a hot spot. Even the cell phone is now a smart phone, also capable of keeping one connected. But what was the default desktop wallpaper in that old Windows environment? Clouds. The concept for what is now known as cloud computing was envisioned as far back as the 1960s (Cantu 2011). Like an electric utility, cloud computing delivers shared devices and peripherals from one central source, so that applications can draw on those services as needed.
A good example of this is smart-phone technology, where the phone draws resources only as needed from central servers at the service provider. Several advantages can be realized by using cloud computing, including the versatility of being able to access current data from anywhere, such as a salesman at a presentation meeting across the country. The Internet is the preferred way of reaching the cloud, so any browser-capable device will suffice, as the sketch below illustrates. Costs are also kept down, as businesses are not required to purchase and operate expensive servers.
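As a rough illustration of that thin-client idea, the following sketch pulls current data from a central service over HTTP. The endpoint URL and the data it returns are hypothetical, standing in for whatever a real provider would actually expose.

```python
# Illustrative only: a thin client fetching current data from a central
# cloud service over HTTP. The endpoint URL below is hypothetical.
import json
from urllib.request import urlopen

ENDPOINT = "https://example-cloud-provider.invalid/api/sales/latest"  # hypothetical


def fetch_latest_figures(url: str = ENDPOINT) -> dict:
    """Pull the most recent data from the central server; the client stores
    nothing locally and needs only an internet connection."""
    with urlopen(url, timeout=10) as response:
        return json.load(response)


if __name__ == "__main__":
    try:
        print(fetch_latest_figures())
    except OSError as err:
        # Network access is assumed; report the failure if offline.
        print(f"Could not reach the cloud service: {err}")
```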
Disaster contingency is also a factor, especially if the data is held redundantly across several servers. Users also have access to the most up-to-date applications, as they are updated from one central location. It has been mainly a business oriented