
3D Silicon Chips in Computer Architecture - Essay Example

Summary
The paper "3D Silicon Chips in Computer Architecture" states that enabling software developers and programmers to quickly find bugs, recognize operational blockages and protect their code against attacks should be the primary goal of system designers at every level…

3D Silicon Chips in Computer Architecture

Over time, the number of transistors on a chip has risen massively. However, the performance gains that should follow from such density have not kept pace with the technological progress. Contemporary processors are very complicated, and software developers need specialized development tools, including memory-leak detectors, security profilers, dynamic type analyzers, and data flight recorders. Most of these systems require full-system data that spans several interacting threads, processors, and processes (Billah 25). To make the next generation of applications easier to build and less error-prone, specialized hardware needs to be added to support introspection and profiling. While such additional hardware would be helpful, it increases the cost of every die. A novel way to overcome this problem is to place the specialized analysis equipment on active layers that are vertically stacked onto the processor die. The benefit of this is that it provides, as a modular option, functionality that need only be incorporated into developer systems, keeping costs down (Zhang 5).

There are many advantages to utilizing inter-die vias for analysis, and the impact of implementing them on temperature, area, energy, and routability in the resulting systems can be quantified. Hardware stubs may be built into production processors during planning, allowing analysis engines to be attached via expansion chips. These stubs raise power and area by as little as 0.9% and 0.021 mm² (Sander 25). It is not easy to develop high-quality software for modern computer systems. Performance-critical applications can perform billions of calculations and run in a complex environment with numerous run-time mechanisms, which are increasingly responsible for managing architectural resources such as hardware and power budgets.
To deal with this complexity, designers depend increasingly on sophisticated software-analysis tools. Mixed static-dynamic analysis can be performed on software using binary instrumentation, but the depth of analysis that can be performed during testing is constrained by the performance overhead that can be tolerated. This is critical for long-running as well as interactive programs (Power 18-20). To allow run-time analysis at minimal cost, researchers have proposed adding dedicated on-chip hardware units that help software designers create bug-free, more efficient, and more secure software. For example, processors may gain the ability to insert instructions directly into the execution stream, which helps in profiling a program for buffer exploits and performance. Analysis modules may be used to discover performance bottlenecks, and hardware performance monitors can study the activity of the branch unit and the cache (Sokolovskij 42). Also, replay engines may be installed to help track down bugs that are difficult to reproduce. A host of other devices has been suggested in the research literature. The wealth of information available at the hardware level makes it natural to add new run-time analysis functionality there. However, the presence of specialized analysis equipment on the chip runs against the marketing and cost constraints of those who build consumer microprocessor systems. Analysis modules need a large area, and they cause interconnect congestion since they need signals from various portions of the chip. Consequently, processor designers hesitate to add anything except the most modest of modules: the additional cost cuts into profits, and such modules must be replicated on every processor whether or not the end user ever exercises them (Power 21-23). For instance, hardware monitors such as performance counters are present on nearly every high-end processor on the market today.
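The overhead problem with software-only dynamic analysis described above can be illustrated with a minimal Python sketch: a tracing hook fires on every function call, so the analysis itself consumes time proportional to the program's activity, which is precisely the cost that dedicated hardware would absorb. The `profile_calls` helper is hypothetical, written only for illustration.

```python
import sys
from collections import Counter

def profile_calls(func, *args):
    """Run func under a tracing hook that counts every function call.
    Each call event costs an interpreter callback, illustrating why
    software-only instrumentation carries a heavy overhead."""
    counts = Counter()

    def tracer(frame, event, arg):
        if event == "call":
            counts[frame.f_code.co_name] += 1
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)   # always remove the hook
    return result, counts

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

result, counts = profile_calls(fib, 10)
print(result, counts["fib"])  # 55 177 - fib is entered 177 times
```

Even this trivial analysis multiplies the cost of every call; the heavier analyses discussed below (memory protection, data-flow tracking) interpose on every memory access, which is where the 10x-10,000x slowdowns cited later come from.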
These counters are built into the architecture description, together with the design that goes into each die, and are thoroughly tested and verified. Yet for all the work done on these systems, almost all users who purchase a machine never think about them or use them. They are incorporated almost entirely for the benefit of commercial software developers, who use the counters to optimize and tune production code. Although HPMs may be worthwhile because of their tiny dimensions, most dedicated hardware support for developers will remain unused, because the vast majority of consumers do not create performance-critical code; they merely execute it (Onagi 15). This does not imply that adding developer functionality is useless. Rather, extra hardware is useful only to a small number of users who create serious code for everyone else to use. The problem with high-end software analysis is therefore how to enable these methods with the smallest effect on end-user systems. Experts have devised a novel technique by which analysis functionality can be added to a computer. In particular, they suggest a modular way of adding analysis hardware to next-generation processors. Several 3D technologies, such as those that incorporate inter-die vias, are under evaluation in the industry as a method of stacking multiple chips together (Durán 1508). Some possible uses involve stacking DRAM or a larger cache directly onto the processor die to ease the pressure on memory, as well as building stacked chips out of multiple processors. The principal idea is that two sections of silicon are fused together to create one chip, interconnected using inter-die vias, or posts, that run between them. The ability to interconnect multiple layers suggests adding a layer to the processor that has access to the most vital signals of the system.
A processor with such a layer could be sold to developers, while commodity systems would ship without the additional analysis layer (Durán 1509-1513). Software-only schemes are extremely popular because of their hardware independence, but they also need system-level support that perturbs the software under test, however carefully it is applied. While engineers keep reducing software profiling overheads through shrewd sampling and switching, developers continually require heavier-weight methods of dynamic analysis than software alone can provide. The most modern machines include some form of performance counter, and many experts support this idea (Onagi 49). Although these counters are invaluable in quantifying a machine's performance, they are of little help with more sophisticated analysis methods because they lack the flexibility to profile events at the application level, and they require significant software handling to mine useful information. Nevertheless, if general-purpose profiling hardware were included, it could be used to analyze and instrument an executing program directly, regardless of the software layers in use. Numerous researchers have suggested incorporating such engines on the chip. While these methods offer an efficient technique for handling data captured at the CPU level, hardware developers have been slow to include such devices, for several reasons (Durán 1514). One of the primary drawbacks of building a monitoring device directly into the main processor is the significant interconnect congestion it creates. As recent research shows, the problem of monitoring performance and collecting data from across the entire chip for consolidated analysis is enormous. Implementing a global interconnect poses serious challenges: it would have to cross every design boundary and consume many of the top metal layers, and it must connect widely separated areas of the processor.
Besides, it would have to operate at extremely high speed. For instance, streaming out the address of every loaded instruction might consume approximately 64 Gbps of bandwidth. This amount of data, combined with the long distances involved, demands wire buffering as well as pipeline latches (Chen 78). It also requires that the silicon be partitioned into many separate blocks so that the wires can reach the necessary transistors. In a fully custom design, this demands a substantial amount of engineering effort spread across the entire architectural and physical design, and many businesses are hesitant to take on the added complexity (Boeuf 64). Rather than being forced to route performance data through other parts of the chip, inter-die vias can carry data out of plane to a specially constructed layer. Space must be reserved for the gates that control the posts and switch these large pieces of metal, which will require some amount of power. Although there is some overhead, the areas designated for the posts are confined to the footprint of the tap, and no additional coordination is needed between the developers of the different blocks. The wires can be shorter, and the power expense is reduced compared to on-die routing (Zhang 98). The second advantage of 3D integration is that it offers a way to lessen the total cost for the end user. The cost of an integrated hardware monitor would otherwise be paid by every end user, although most of them will never need such functionality. There are approximately 225 million personal computers in the United States, roughly three computers for every four people, but only about 700,000 professional programmers. Even if every one of those programmers demanded a system with hardware support for debugging, the market for such devices would be orders of magnitude smaller than that for commodity personal computers.
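The 64 Gbps load-address figure quoted above is consistent with a simple back-of-the-envelope calculation; the 1 GHz clock rate and 64-bit address width used here are illustrative assumptions, not values given in the text.

```python
# Sanity check of the load-address bandwidth figure:
# one 64-bit virtual address streamed out per cycle at 1 GHz.
clock_hz = 1e9           # assumed: one load retiring per cycle at 1 GHz
address_bits = 64        # assumed: 64-bit addresses

bandwidth_gbps = clock_hz * address_bits / 1e9
print(bandwidth_gbps)    # 64.0 - matches the ~64 Gbps cited above
```

A real workload would not retire a load every cycle, but wider superscalar issue or faster clocks push in the other direction, so tens of gigabits per second is a reasonable order of magnitude for the interconnect to sustain.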
Stacking monitors on a small set of devices lets vendors extend the central processor without affecting the cost of the main processor. Researchers advocate selling one type of processor that is provisioned with connections for hardware monitoring (Kim 36). The difference between the system sold to consumers and the one sold to developers is simply whether the monitor layer is stacked on top or not. Therefore, two costs need to be considered. The first is the cost of the developer system with hardware monitoring and analysis stacked on it. This includes the cost of fabricating systems that use 3D technology: an analysis engine must be mounted, the temperature effects may require more expensive heat-sinking technology, and the monitor layers themselves must be tested and fabricated (Billah 78). The extra fabrication cost is hard to estimate. Many experts support the adoption of 3D IC technology for performance reasons, and the cost of adding another layer will be lower if one analysis layer can serve multiple families of chips. Moreover, the stacked system will generate extra heat, both from driving the posts and from the active monitor layer, which will affect cooling costs. The second cost is that of a consumer system with the hardware monitor left off. The average consumer will want to purchase a system without an attached analysis engine, so the extra expense of making the central processor monitor-compatible must be measured. That additional cost comes from the area taken by the circuits that control the posts and by the vertical column of vias needed to connect the area where the post would go (Chen 47). The final benefit of placing a hardware monitor above the main processor is its potential to open new avenues of research into heavy-duty dynamic program analysis.
Modern run-time systems are limited by the overhead of analysis as well as by the limited bandwidth available, yet many examples of such analysis already exist. One system, Mondrian Memory Protection, extends memory protection to arbitrarily small memory ranges with separate read, write, and execute permissions; implemented through emulation, it has been shown to identify numerous types of software bugs. Unsafe-pointer-dereference analyses, such as fat pointers or dangerous-memory-region tracking, can examine code that is targeted by network-based attacks such as worms (Durán 1513). Tracking the flow of data through the architecture can identify suspicious uses of data, enabling the detection of worms in the wild, while data flight recorders allow the state of the architecture to be played back when a bug or attack occurs. These analysis methods are powerful tools, but they can slow the system down by between ten and ten thousand times. Many scholars have suggested using comprehensive profile information to enable informed scheduling and optimization decisions, and many power-management methods would benefit from a clearer picture of what the application does and how it interacts with the system (Kim 46). Although these analysis systems have high potential, a commercially sustainable method of speeding them up is required. Since the monitor is separate from the core processor, the amount of power and area that can be assigned to analysis is massively amplified. Enabling software developers and programmers to quickly find bugs, recognize operational blockages, and protect their code against attacks should be a primary goal of system designers at every level. One way design could help solve these problems is through the construction of machines that support dynamic analysis methods with the least interference in the software.
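The fine-grained protection idea behind Mondrian Memory Protection can be sketched with a toy word-granularity shadow table: every memory word carries its own permission set, and every access is checked against it. The addresses, the 4-byte word size, and the `ShadowMemory` class below are illustrative assumptions, not the actual MMP encoding, which uses compressed permission tables in hardware.

```python
WORD = 4  # assumed word granularity for this sketch

class ShadowMemory:
    """Toy Mondrian-style shadow table: word address -> permission set."""

    def __init__(self):
        self.perms = {}  # word-aligned address -> subset of {'r','w','x'}

    def protect(self, addr, length, perms):
        # Mark every word overlapping [addr, addr+length) with perms.
        for a in range(addr - addr % WORD, addr + length, WORD):
            self.perms[a] = set(perms)

    def check(self, addr, access):
        # An access is legal only if its word grants that permission;
        # unmapped words grant nothing, so overflows are caught.
        return access in self.perms.get(addr - addr % WORD, set())

shadow = ShadowMemory()
shadow.protect(0x1000, 16, "rw")   # a 16-byte heap object: read/write
shadow.protect(0x2000, 8, "r")     # an 8-byte read-only region

print(shadow.check(0x1004, "w"))   # True  - write inside the object
print(shadow.check(0x2004, "w"))   # False - write to read-only data
print(shadow.check(0x1010, "r"))   # False - read past the object's end
```

Doing this lookup in software on every load and store is exactly what produces the 10x-10,000x slowdowns mentioned above; a stacked analysis layer could perform the same check in parallel with the access.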
Scholars suggest that the hardware support for attaining these goals should be decoupled from the usual end-user system. Implementing a supplementary analysis engine able to perform all the necessary dynamic analysis, and stacking that engine on top of the primary processor using 3D IC technology, is a plausible solution (Sander 79). The greatest benefit of this method is that the cost of the specialized analysis hardware is decoupled from the high cost sensitivity of the consumer market. Users can therefore still purchase their cheap, high-performance machines, since the only additional hardware they pay for is the stubs.

Works Cited

Billah, Muhammad Rodlin, et al. "Multi-Chip Integration of Lasers and Silicon Photonics by Photonic Wire Bonding." CLEO: Science and Innovations. Optical Society of America, 2015.

Boeuf, Frederic, et al. "Recent Progress in Silicon Photonics R&D and Manufacturing on 300mm Wafer Platform." Optical Fiber Communication Conference. Optical Society of America, 2015.

Chen, Peiyu, and Aydin Babakhani. "A 30GHz Impulse Radiator with On-Chip Antennas for High-Resolution 3D Imaging." Radio and Wireless Symposium (RWS), 2015 IEEE. IEEE, 2015.

Durán, S., et al. "Silicon Nanowire Based Attachment of Silicon Chips for Mouse Embryo Labelling." Lab on a Chip 15.6 (2015): 1508-1514.

Kim, Eric G. R., et al. "3D Silicon Neural Probe with Integrated Optical Fibers for Optogenetic Modulation." Lab on a Chip (2015).

Onagi, Takahiro, Chao Sun, and Ken Takeuchi. "Impact of Through-Silicon Via Technology on Energy Consumption of 3D-Integrated Solid-State Drive Systems." Electronic Packaging and iMAPS All Asia Conference (ICEP-IACC), 2015 International Conference on. IEEE, 2015.

Power, Jason, et al. "Implications of Emerging 3D GPU Architecture on the Scan Primitive." ACM SIGMOD Record 44.1 (2015): 18-23.

Sander, Christian, et al. "Isotropic 3D Silicon Hall Sensor." Micro Electro Mechanical Systems (MEMS), 2015 28th IEEE International Conference on. IEEE, 2015.

Sokolovskij, R., et al. "Design and Fabrication of a Foldable 3D Silicon Based Package for Solid State Lighting Applications." Journal of Micromechanics and Microengineering 25.5 (2015): 055017.

Zhang, Xiaowu, et al. "Heterogeneous 2.5D Integration on Through Silicon Interposer." Applied Physics Reviews 2.2 (2015): 021308.
