Current Challenges Faced by the Synergy Sol - Assignment Example

PART-1

Peter Pan
CEO, Synergy Sol

Further to our discussion with your team on the current challenges faced by Synergy Solutions, below are our findings and proposed solutions to overcome the problems.

Current challenges faced by Synergy Sol
Over the course of the last four years CRIM has been stable, but with the recent infrastructure upgrade Synergy Sol has opted for the latest Windows 8.1 operating system and has additionally upgraded CRIM to run on Windows Server 2013 Release 8. You have invested heavily in new Windows workstations and new Dell servers. The basic architecture of the CRIM application has now been redesigned to integrate with the MS Office suite. The replacement of the old CRIM application with an MS Office Suite plug-in was performed on the desktop, which connects by means of a SQL connector to the database running on an upgraded server. On the server side, the CRIM application server and database server run on one Dell server. Documents are now saved directly from Outlook and MS Word without the user having to save a local copy. With the old application, the user had to use the standalone CRIM client application to upload documents to the CRIM records management system. It was the promise of this automated feature that prompted the million-dollar upgrade.

Issues identified in the current system
During our site visit, certain issues were found that need to be corrected in the database system:
- The database is not properly designed and requires performance tuning.
- The Windows OS server is not optimized.
- The CPU is overloaded.
- There is a memory leak in the system; the Windows OS configuration needs to be tuned.
- A large number of operations are generating excessive disk input/output.
- Network I/O has to be optimized.
- The CRIM application itself requires tuning.
Below are the solutions for each of the challenges described above.

1. Database tuning
The way data is stored in the Synergy Sol file store is not correct. Databases are involved in most integration scenarios, frequently as sources or targets, often as cross-reference or other intermediate data-enrichment sources, and sometimes as a core part of the integration engine itself. Appropriate use of indexes, tablespace design, data partitioning, cache configuration and pinning are all design choices that should be made in the database, but in this system the tuning is inadequate. There are also a few more areas that need attention, for instance managing the number of database calls through stored procedures versus individual calls, managing fetch sizes, proper use of bind variables, and optimizing queries to reduce temporary table creation. Together these all play a critical part in tuning the overall integration scenario.

a) Establish clear requirements
The problem with the current requirements is that they do not specify how many users can be connected to the database at a time, and the hardware infrastructure needs to be sized accordingly. Define the requirements and constraints up front, and design to meet or exceed them. Determine the peak loads typical for the overall system, including peak concurrent users, required response times, and peak throughputs.

b) Define desired outputs
A key to performance tuning is having good measurements in place to capture useful data. The system also does not define the SLA required for data retrieval. Knowing only the throughput of an interface will not help identify the bottleneck. There is a surprising number of individual metrics that can be collected, and measuring too many can create its own performance problems.
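The point above about bind variables and reducing the number of per-statement round trips can be illustrated with a small sketch using Python's built-in sqlite3 module; the table, columns and data are hypothetical stand-ins for the CRIM database:

```python
import sqlite3

# In-memory database stands in for the (hypothetical) CRIM database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, title TEXT)")

rows = [(1, "contract.docx"), (2, "invoice.docx"), (3, "memo.docx")]

# One executemany() with bind variables ("?") instead of three
# string-built statements: the statement is prepared once and reused,
# which is the per-call overhead the text describes.
conn.executemany("INSERT INTO records (id, title) VALUES (?, ?)", rows)
conn.commit()

# Queries should likewise use bind variables, never string concatenation.
count = conn.execute(
    "SELECT COUNT(*) FROM records WHERE title LIKE ?", ("%.docx",)
).fetchone()[0]
print(count)  # 3
```

The same pattern applies with any DB-API driver; only the placeholder style differs between databases.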
There are certain observations and checks that should run periodically, but your system lacks them:
- System monitoring: CPU and memory usage, disk I/O, network I/O. Checking regularly for memory and CPU resource problems is essential, because a shortage of memory or CPU can build up over time, and DBAs need to be able both to view these resources and to relate them to current trends in workloads.
- Application monitoring: queue depths, thread pools, connection pools, JVM monitoring, and source or target application performance.
- Monitor object usage: determining whether objects such as indexes are actually used can shed light on many transaction-type issues. Too often indexes are poorly placed or become outdated; keeping index structures, and other objects, clean makes for a tight application environment.
- Monitor performance ratios: while these ratios have received bad press in the past, they are still genuinely useful in finding areas that may offer tuning opportunities.
- Database monitoring: cache hit ratio, SQL response time, database table activity. Be able to manage objects such as tables, indexes, views, triggers and so on; improperly built objects such as views, indexes, triggers and procedures can have a dramatic effect on query performance. Tackle these issues and many performance problems will go away as well.

2. Windows OS server optimization
a. Insufficient memory is allocated for operating system tasks; it needs to be allocated.
b. Sufficient disk space must be allocated to the server's swap area.
c. Using raw disk for database files can reduce OS and file-system overhead, so the database files need to be allocated accordingly.
d. Tasks assigned to the database are not prioritized properly; they have to be prioritized according to need.
e.
The Windows OS server version is not compatible with the DBMS version.

3. CPU
The CPU is the easiest of the performance factors to monitor, but also one of the hardest to improve. There are several performance-monitoring tools, but the simple ones are often sufficient for many cases, such as Performance Monitor on Windows systems, which shows what percentage of the CPU is being consumed at a given point. One common technique for increasing overall system performance, concurrency and throughput is to provide parallel execution paths. This can be done dynamically (multiple execution threads) or deterministically (partitioning the data through filtering conditions, or through hashing algorithms on message keys). For parallel execution paths, it is important to consider whether this is functionally valid, as some systems may have message-sequencing requirements or, in batch cases, may require aggregate operations where all messages of a particular key set must be processed together. The CPU problem we found is that system memory is insufficient: the CPU maxes out due to disk thrashing, because real memory and virtual memory are being swapped on and off the disk repeatedly. A fully consumed CPU does not always indicate a fully utilized CPU.

4. Memory
In the Windows server OS, memory leaks were found to occur, and as a result the memory is not being utilized properly. Memory is likewise generally easy to monitor using the same simple tools. When a more detailed analysis of how the memory is being used is required, a suitable diagnostic tool such as JProbe (for Java environments) can be used.
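The memory considerations above can be illustrated in Python with generators: streaming records one at a time keeps the memory footprint constant regardless of input size, instead of materializing the whole data set. The record shape and transformation below are hypothetical:

```python
def read_records(n):
    """Simulates a large input source, yielding one record at a time."""
    for i in range(n):
        yield {"id": i, "value": i * 2}

def transform(records):
    """Streaming transform: only one record is resident in memory
    at any moment, so the footprint does not grow with input size."""
    for rec in records:
        yield {**rec, "value": rec["value"] + 1}

# Nothing is materialized until the sink consumes the stream.
total = sum(rec["value"] for rec in transform(read_records(1000)))
print(total)  # 1000000
```

Collecting the intermediate results in a list instead would hold all 1,000 records at once; with large messages or many subscribers that difference multiplies, which is the footprint effect described in the text.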
For message-based integration, it is important to remember that many integration tools add overhead to a message, so that, especially for large messages, the memory footprint can become many multiples of the original message size. If the message has many subscribers, it can have that same footprint many times over. For batch-based integration, there is much to gain in performance by using memory as much as possible and limiting expensive disk I/O. ETL tools excel at this: some tools stream the messages through memory, minimizing the footprint while maximizing performance. Remember that the more operations that can be performed in memory without access to the disk, the higher the performance of the system will be. ETL tools with streaming can often achieve 1,000 messages per second per CPU using memory streaming, while message-based EAI tools rarely exceed 100 messages per second per CPU because of the disk I/O incurred for each individual message.

5. Disk I/O
There appears to be no optimized number of disk reads and writes between the Windows server and the database. Disk I/O is often the most expensive operation happening in your systems, and it is pervasive not only across the integration engine but also in any applications or databases that the integration touches. After all, this is not just electrons flying around: there is a physical disk platter spinning, with a physical read/write head reading from and writing to the media. In message-based integrations, following the basic principle above (only enqueue when necessary) goes a long way towards minimizing the amount of tuning to be done. An additional consideration for message-based integration scenarios is managing the file sizes for random-access files, typically the queues.
Most queuing systems can be configured to manage the size of the files used for queuing, how much these files can grow, and the number of files that can be used as part of a logical queue, as well as the cleanup schedules for these files. Determining message size and message-rate patterns early in the process will enable you to configure queues for maximum performance and stability. For batch scenarios, the physical configuration of the disks can have a significant effect on performance. Modern disk-array technologies provide some relief from detailed performance tuning at the disk I/O level, but for the highest levels of performance in high-volume batch integration, even SAN (and similar) disk-management solutions can cause delays in processing. Remember that a physical disk has only a single read/write head; although disk caching can mitigate performance problems considerably, a configuration where the I/O targets a contiguous portion of physical disk will certainly perform better overall than one requiring the head to jump back and forth to handle multiple simultaneous (but perhaps unrelated) operations. These same fundamentals should be applied to database configurations: index tablespaces are almost always placed on separate physical disks from data tablespaces, because both are typically accessed for a single operation, and putting them on physically separate disks reduces contention at the physical level. Too many systems are built without investigating whether the disks can keep up with workload requirements. Very often DBAs are brought in to solve performance problems that turn out to be disk I/O contention or speed, about which they can do nothing. It is essential to understand the disk limitations.

6.
Network I/O
The simplest rule here is "minimize network I/O", so that an optimal number of calls are made between the Windows OS and the database, which will reduce CPU utilization as well. Achieving this may require changes to the network hardware configuration. However, this is sometimes easier said than done. In basic integration scenarios, source messages are transported through a variety of mechanisms (e.g. FTP, queuing, APIs, adapters) to an integration server, where they are transformed and sent to one or more targets via a similar set of mechanisms. As Service-Oriented Architecture, Web Services and composite applications become more mainstream, however, the number of messages travelling over the network for a single "transaction" increases. Though there is often no sensible way to limit how many messages are sent over the network, it is sometimes possible to assess the size of the messages. If only 0.1% of a 2 GB record is required for processing, it may be worthwhile to filter out the unnecessary data before sending it over the network.

7. External application tuning: the CRIM application
Just as database tuning can affect performance, so can the applications themselves. Some applications are configurable to permit load balancing, either dynamically or statically. A proper understanding and use of these methods and mechanisms is critical to the overall performance of the integration framework. Tuning of the whole CRIM application in your system is a must, because performance testing and tuning should cover the end-to-end process, including the end applications, in a realistic setup that mimics the production configuration.

Best practices to resolve the challenges faced in DB systems
a. The following performance tuning methodology will be implemented to improve performance
Tuning the performance of this kind of environment is an iterative process:
- Collect data: use performance-monitoring tools to gather data during the performance and stress tests.
- Identify bottlenecks: analyze the data to identify performance bottlenecks. Typical bottlenecks include database handlers, applications requiring a lot of I/O, or insufficient resources.
- Identify alternatives: identify, investigate and select alternatives to address the bottlenecks.
- Apply the solution: implement the proposed solution.
- Test: evaluate the performance after the proposed solution has been implemented. Once one bottleneck is resolved, another bottleneck may appear, so the process starts again at collecting data.

b. The following identified environment factors should be rectified
It is vital to have the right hardware to make the performance objectives achievable. Some basic considerations on hardware:
- Memory: the goal is to have enough memory to run all processes on a machine without having to swap processes in and out of virtual memory. Swapping is an extremely time-inefficient mechanism; when all processes run in resident memory, performance will be noticeably improved.
- Disk usage: it is essential to check disk usage, as many databases grind to a complete halt because disk storage is exhausted by audit, trace, alert, database and archive log files. Clearing up or allocating space is all the more important.
- Alert log entries: look for alert log entries. The alert log is often your key to the engine, as it reports errors and problems as they are encountered; being able to quickly find alert log messages and relate them to performance problems is vital.
- Network bandwidth: obviously more data can flow over a 1 Gbit switch than a 10 Mbit LAN. Be mindful of other systems that use the network; if many applications share the same network, it can affect the performance of the integration framework.
- Network topology: depending on the network topology and the number of publishers, packet collisions can cause performance problems even when the network is not fully loaded. A fully switched network makes considerably more effective use of the bandwidth than a simple looped network. Over a WAN, efficient use of bandwidth becomes even more critical, as the effective bandwidth is often only in the hundreds of kbits.

c. 80/20 rule
Focus on the workflows that take up 80% of the processing. Typically, this will be a small subset of the interfaces.

d. Balance queues
Assess the expected throughput of the queues and balance the messages accordingly. For high-volume throughput, use one queue for a given publication/subscription. Lower-volume or batch messages can be grouped together on a separate set of queues. The objective is to spread the messages over an appropriate number of queues to gain maximum throughput for a particular transaction. Keep in mind that different queuing systems manage queue configuration differently, and an understanding of the inner workings will allow the team to plan for improved performance.

e. Load balancing: content-independent
One option to improve performance is to distribute the traffic among different servers to ensure that no single server is overburdened. Being able to centralize and manage user accounts and security, work out who is doing what and what authorizations they have, and trim access can often stop runaway users from doing too much inside the database and driving up resource utilization.
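Distributing traffic among servers as described above is commonly done round-robin or weighted round-robin. A minimal Python sketch, with hypothetical server names and weights:

```python
from itertools import cycle

def weighted_round_robin(servers: dict):
    """Yield server names in proportion to their weights.
    servers: {"name": weight} -- faster servers get higher weights,
    so they receive a larger share of the load."""
    expanded = [name for name, w in servers.items() for _ in range(w)]
    return cycle(expanded)

# Plain round robin is the special case where every weight is 1.
rr = weighted_round_robin({"fast-1": 3, "slow-1": 1})
first_eight = [next(rr) for _ in range(8)]
print(first_eight)
# ['fast-1', 'fast-1', 'fast-1', 'slow-1',
#  'fast-1', 'fast-1', 'fast-1', 'slow-1']
```

Real load balancers interleave rather than burst the weighted picks, but the proportions (here 3:1) are the essential idea.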
The content-free approach makes no assumptions about the content of the message and routes the message according to simple rules:
- Round robin: if there are multiple servers with similar configurations (CPU, memory, content, etc.), the load is evenly and equally distributed among the servers.
- Weighted round robin: in some situations the performance of the different servers is not equal. In this case, the faster servers are weighted more heavily so that they receive a larger share of the load than the slower servers.
- Virtual IP: a virtual IP address is a public IP address for the set of servers. A request made to this virtual IP will be delivered to whichever systems make up the server farm. The load can be spread out in round-robin or weighted round-robin fashion; meanwhile, external systems connecting to the EI application only need to know the virtual IP address.

f. Load balancing: content-dependent
With an understanding of the data content, intelligent routing is possible. Although this requires slightly more overhead than the content-independent methods, it also allows batch operations or message sequencing to be maintained. Basic schemes for segmenting the data use simple filtering (A-D to processor 1, E-H to processor 2, etc.), but for complex filtering keys, or a large number of segments, this approach can be tedious and error-prone. A simple hashing algorithm that consolidates complex keys into a single hash, with routing done by filtering on the hash, is conceptually more involved but is an effective method for deterministically balancing a large amount of data across many processors.

g. Instance monitoring and backups
Check that an instance is up and running; clearly its absence leads to performance problems (a complete lack of performance), and a DBA must know how to start and stop an instance and verify whether it is available.
Confirm that backups are running successfully and be able to perform various recoveries: backups are your safeguard for ensuring data is protected. Checking that a backup has finished successfully enables you to validate that you can recover, and that a currently running or stuck backup is not causing performance problems.

h. The DB should be designed based on determined growth patterns and past results
Recognize growth patterns within the database: this is broadly similar to verifying disk use, but the growth patterns inside a database are sometimes different from those visible externally on disk. Make sure you can get a handle on how the various database structures grow, so that you can plan for the future if necessary. Analyze past results from checklists: it is almost impossible to know whether performance is good today unless you have something from the past that tells you today's performance is worse. There are certainly times when systems grind to a halt, but these are normally, or at least ought to be, exceptional events. Review past performance trends and current application workloads, and take appropriate action if performance is an issue.

Above are all the problems that the Synergy Sol database system is facing, discussed briefly, along with solutions for each of them.

XYZ
Senior Database Performance Consultant
Superhero Computer Solutions Limited PTY LTD
1 Macquarie Park, Talavera Road
Website: www.superhero.com.au
Email: sales@superhero.com.au
Phone: +61 2 9899 8888

PART-2

Peter Pan
CEO, Synergy Sol

Further to our discussion with your team on the current challenges faced by Synergy Solutions, below are our findings and proposed solutions to overcome the problems.
Current challenges faced by Synergy Sol
For each new customer and every subsequent engagement, Synergy Sol logs the action/investigation as a case in a standalone application used specifically for lodging new cases, called Piccolo. The Piccolo client is also installed on the SOE (Standard Operating Environment). Because of the growth in the volume of business, the Piccolo system is now experiencing a set of performance problems. The MS SQL 2012 Server database appears to be the cause of all the performance issues.

Issues identified in the system
As part of the investigation, the following issues were found:
- The audit logs and the database reside on the same hardware base.
- A small group of users is running regular ad hoc reports against the live database, affecting the performance of the application for all users.
- There appears to be no database backup system.

The solution for each of the above is:

1. Transaction log / audit log best practices
The Exchange Server transaction log is another component that causes disk I/O. Exchange Server writes data to transaction logs, and then later commits that data to the Exchange databases. Exchange transaction logs are also used as part of the disaster-recovery process should an Exchange database fail. Since the transaction logs are an essential piece of the Exchange Server disaster-recovery process, it makes sense that they have to be on a different disk from the Exchange databases. Keeping the Exchange transaction logs separate from the databases is important from a performance standpoint as well.
Our recommendation: since all of the Exchange databases within an Exchange storage group share a common set of transaction log files, you need one dedicated volume per storage group for storing Exchange transaction logs.
Moreover, Exchange Server writes transaction logs sequentially, so even though a RAID array is recommended, you can get away with a RAID 1 array without sacrificing performance. Microsoft does suggest that the RAID 1 array be SAN-based, however.

2. Database replication and reporting copies
Replication is the process of copying and maintaining database objects in multiple databases that make up a distributed database system. Replication can improve performance and ensure the availability of applications because alternative data-access options exist. In Piccolo, users are running ad hoc queries directly against the live database, which hampers its performance, as the live database is open to any kind of change at any time.
Our recommendation: a replicated database should be set up, which will resolve the performance problem of ad hoc queries running against the live database. Users can be switched to the replica database for ad hoc queries, which will improve performance. It should also be reviewed from the security perspective: data must be available at all necessary times, and only to the appropriate users, and there must be a guarantee that the data has been modified only by an authorized source.

3. Database backup solutions
Backing up your databases can protect an organization against the accidental loss of data, database corruption, hardware or operating-system crashes, or any natural disaster. You have to verify that the databases are backed up routinely and that the backup tapes are stored in a secure location. The DBA must be prepared for situations where a failure affects the availability, integrity or usability of a database, and the DBA's ability to react accordingly depends directly on having a well-planned approach to database backup and recovery. Unfortunately, you do not have any database backup in your system.
Having a good backup is essential in order to be prepared for situations where you lose a database, or a table becomes corrupt, and you need to reload the data from this backup. If you lose your whole server, you will need to set up a new one and reinstall your backup software before you can make use of the backup.
Our recommendation: as a solution to this problem, a full internal backup plan is recommended. It will back up the whole database, which includes a portion of the transaction log. Full backups should be scheduled for whenever the server is least used during the day. During a full backup, the backup operation essentially copies only the data that is actually present in the database to the backup file; the free or unused space in the database is discarded entirely. The DBA can estimate the size of a full backup by using the sp_spaceused system stored procedure.
Although data migrations are practically inevitable, they are rarely well planned and even less frequently budgeted for in terms of cost and time. A careless and ineffective execution of a migration can result in significant delays to the migration timeline and cost overruns, or worse, data loss and/or revenue loss, ultimately ending in an unnecessarily painful and time-consuming exercise. Each application or server migration is unique. Hence, the plan for migrating applications onto this database system should be based on the type of application requiring migration, the type of source and destination physical or virtual architecture you are moving between, and the data volume, particularly the volume of active data requiring migration.
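The replication recommendation above (point 2) amounts to a simple routing rule: transactional work stays on the live (primary) database, while ad hoc reports go to the replica so they cannot affect live performance. A minimal sketch, with hypothetical connection names and query categories:

```python
def route(query_kind: str) -> str:
    """Route ad hoc/reporting work to the read replica so it cannot
    degrade the live (primary) database; everything else stays primary."""
    if query_kind in ("adhoc", "report"):
        return "replica"
    return "primary"

print(route("report"))  # replica
print(route("insert"))  # primary
```

In practice this rule lives in the application's connection layer or in a routing proxy; the categories and connection names here are illustrative only.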
Proposed architecture diagram for the overall recommended solution:

Above are all the problems that the Synergy Sol database system is facing, discussed briefly, along with solutions for each of them.

XYZ
Senior Database Performance Consultant
Superhero Computer Solutions Limited PTY LTD
1 Macquarie Park, Talavera Road
Website: www.superhero.com.au
Email: sales@superhero.com.au
Phone: +61 2 9899 8888

LIST OF ASSUMPTIONS
- Synergy Sol will provide financial support for any additional database requirements or any additional hardware infrastructure.
- Any new patches to be applied to the operating system will be arranged from the respective vendor by Synergy Sol.
- Full support will be provided by the Windows server team for the resolution of any related problems.