Relational Database Management System and Web Integration - Report Example

Summary
The report "Relational Database Management System and Web Integration" presents a critical analysis of the main issues concerning the Relational Database Management System (RDBMS) and web integration, a system that organizes data into related columns and rows…

Introduction

A relational database management system (RDBMS) organizes data into related columns and rows (Anagnostopoulos, Zeadally & Exposito, 2016). Its defining features are that data is stored in tables, tables consist of rows and columns, SQL is used to create the tables, and the data entered into the tables is retrieved through SQL. An RDBMS allows the user to create, administer, and update a relational database. Structured Query Language (SQL) is the language used in most commercial products to access the database.
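
As a minimal illustration of these ideas, the sketch below uses Python's built-in sqlite3 module (a small, file-based RDBMS) to create a table with SQL, store a row, and retrieve it with SQL; the table and column names are hypothetical.

```python
import sqlite3

# Open (or create) a small relational database; ":memory:" keeps it in RAM.
conn = sqlite3.connect(":memory:")

# SQL creates the table: columns and their types are declared up front.
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, course TEXT)")

# Data is stored as rows in the table.
conn.execute("INSERT INTO students (name, course) VALUES (?, ?)", ("Alice", "Databases"))
conn.commit()

# The same language, SQL, retrieves the data that was entered.
for row in conn.execute("SELECT id, name, course FROM students"):
    print(row)

conn.close()
```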

Available RDBMS

Well-known RDBMS products include Oracle, SQL Server, and IBM's DB2. The systems most widely used across the globe are SQLite (open source), PostgreSQL (open source), Microsoft SQL Server, MySQL (open source), and Oracle.

Alternatives

The majority of organizations manage their data using an RDBMS. However, some businesses run application systems that use flat files for data storage (Anagnostopoulos, Zeadally & Exposito, 2016). These include legacy batch systems, which do not support online data transactions. A flat file can simply be stored on a hard drive or a computer tape.
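
For contrast, a flat-file approach might look like the sketch below, which simply appends delimited records to a CSV file on disk; the file name and field values are hypothetical.

```python
import csv

# A flat file on disk: no tables, keys, or query language, just delimited rows.
with open("orders.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["2024-01-15", "ORD-1001", "49.99"])

# Reading the data back means scanning the whole file; there is no SQL to query it.
with open("orders.csv", newline="") as f:
    for record in csv.reader(f):
        print(record)
```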

In the 1980s, network databases such as IDMS were popular. They were common at a time when computers lacked the power and capability of current machines (Williams, 2014). Network databases could support online transactions, but they were inflexible: once a network design was in place, changes were difficult to implement. Hierarchical databases also existed at this time.

NoSQL databases are another alternative and are used to store BLOB and multimedia content. NoSQL comes in several varieties, including graph, key-value pair, column-oriented, and document-oriented stores (Kovacheva, Naydenova, Kaloyanova & Markov, 2017). The classification into these groups depends on how the data is structured. NoSQL is widely used to store unstructured data.
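
As an illustration of the document-oriented variety, the sketch below keeps schemaless JSON documents in a plain Python dictionary keyed by ID; a real document store would persist and index the documents, but the idea is the same, and all names are hypothetical.

```python
import json

# A toy document store: each value is a schemaless JSON document.
documents = {}

# Documents in the same "collection" need not share the same fields.
documents["user:1"] = json.dumps({"name": "Alice", "interests": ["sql", "nosql"]})
documents["user:2"] = json.dumps({"name": "Bob", "photo_blob_ref": "images/bob.png"})

# Retrieval is by key; the application interprets each document's structure.
print(json.loads(documents["user:1"])["interests"])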

A memory cache is another alternative to an RDBMS; it reads and writes objects directly in memory (Kovacheva, Naydenova, Kaloyanova & Markov, 2017). Data is held in in-memory containers such as arrays, hashes, and key-value pairs. A memory cache is used because it is fast and works well on a single machine, but the data is lost when the machine restarts and capacity is limited by the machine's RAM. It is also a good fit for search scenarios that are repeated often.
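
A minimal key-value memory cache can be sketched as below; the data lives only in the process's RAM, so it disappears on restart, mirroring the limitation described above. The class and keys are hypothetical.

```python
# A toy in-memory cache: a key-value container held entirely in RAM.
class MemoryCache:
    def __init__(self):
        self._store = {}  # lost when the process (or machine) restarts

    def get(self, key, compute):
        # Return a cached value, or compute and cache it on a miss.
        if key not in self._store:
            self._store[key] = compute()
        return self._store[key]

cache = MemoryCache()

# Repeated searches hit the cache instead of recomputing or re-querying.
print(cache.get("top_customers", lambda: ["Alice", "Bob"]))
print(cache.get("top_customers", lambda: ["never", "recomputed"]))
```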

Advantages of RDBMS

RDBMS has the benefit of a clear data structure: information is organized in a tabular format that is simple for users to understand and work with. This gives users access to the data in an organized and natural manner. An RDBMS also allows several users to access the database simultaneously. Built-in transaction and locking management lets people work with data that is undergoing changes while preventing collisions during updates to data and records.
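
The transaction behavior described above can be sketched with sqlite3: changes made inside a transaction either commit as a unit or roll back, so readers never see a half-finished update. The table and values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # Both updates belong to one transaction; they succeed or fail together.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
    conn.commit()
except sqlite3.Error:
    # On any failure the partial update is undone, avoiding a collision.
    conn.rollback()

print(list(conn.execute("SELECT name, balance FROM accounts")))
```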

Privilege and authorization controls in an RDBMS let database administrators restrict access to authorized users and grant individual privileges for the database tasks that need to be performed (Williams, 2014). The system also provides network access to users via a server daemon, which developers use to build desktop tools and web applications that interact with the databases. The relational model is not the fastest, but advantages such as simplicity make the slower speed a worthwhile trade-off, and RDBMS optimizations together with good database design give applications strong performance across most data sets.

Compared with other systems, an RDBMS is easy for administrators to maintain, repair, test, and back up (Helland, 2016). Much of this work can be automated with tools provided by the database or the operating system. An RDBMS also supports SQL, which has a simple syntax and is easy to learn and understand.
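
The backup facility mentioned above might look like the sqlite3 sketch below, which copies a live database to a separate file using the module's backup API; the file names are hypothetical.

```python
import sqlite3

# Source database to be backed up (in-memory here for the example).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE logs (msg TEXT)")
src.execute("INSERT INTO logs VALUES ('nightly backup test')")
src.commit()

# Destination database file; the backup() call copies every page across.
dst = sqlite3.connect("backup.db")
src.backup(dst)

print(list(dst.execute("SELECT msg FROM logs")))
src.close()
dst.close()
```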

Disadvantages of RDBMS

RDBMS has the limitation of data complexity: data is spread across several tables that are interconnected through shared key values. As a result, broken records and keys can occur. The shared keys are needed to link information that is spread across different tables.
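
The shared-key linkage described above can be illustrated with a join: the sketch below relates two hypothetical tables through a customer_id key, and a broken or dangling key simply produces no matching rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
conn.execute("INSERT INTO orders VALUES (10, 1, 25.0), (11, 99, 40.0)")  # 99 is a broken key

# Information spread across tables is reassembled through the shared key.
rows = conn.execute("""
    SELECT c.name, o.order_id, o.total
    FROM orders o JOIN customers c ON c.customer_id = o.customer_id
""").fetchall()
print(rows)  # the order with the dangling customer_id is silently dropped
```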

RDBMS also places demands on hardware. The processing power required is substantial, which prevents smaller businesses that lack the capacity to invest in such a database from enjoying its benefits. Further, the database needs experienced, expert developers to ensure it delivers the expected benefits, which increases the cost and risk of application and database development for organizations.

Big Data

Big data refers to large volumes of structured or unstructured data that inundate an organization's business activities on a daily basis (Helland, 2016). The amount of data matters less than what the organization does with the data it gathers. Big data is analyzed to provide insights that support better strategic business moves and decisions, which lead to the desired level of business success.

Importance of big data

The essence of big data lies in what is done with the data, not how much data a person has. One can gather data from different sources and analyze it to obtain answers that support time reductions, cost reductions, smarter decision making, and new product optimization and development. When big data is combined with high-powered analytics, it becomes possible to accomplish a range of business tasks, including recalculating risk portfolios and determining the causes of defects, issues, and failures in real time. It also becomes possible to generate coupons at the point of sale based on customers' buying habits, and the organization gets a chance to detect potentially fraudulent behavior before it causes harm.

Characteristics

Big data is characterized by volume, variety, veracity, variability, and velocity. Volume relates to the quantity of data generated and stored; data size helps determine the potential value and insight and clarifies whether a data set can be categorized as big or small. Variety relates to the nature and type of data (Corbellini, Mateos, Zunino, Godoy & Schiaffino, 2017); analysts use variety to gain insight into the message or content of the data. Velocity refers to the speed at which data is generated and processed to meet the demands and challenges of development and growth. Veracity depicts the quality of the captured data, which can vary significantly and affect the accuracy of analysis. Variability refers to inconsistency in the data, which can hamper the processes of handling and managing it.

Tools required

Several tools are used in the process of data analysis. These tools help ensure that data management occurs efficiently and the required outcomes are achieved. Apache Hadoop is a Java-based software framework that enables effective storage of large volumes of data across a cluster; the framework runs jobs in parallel across the cluster and processes data on all the nodes. Microsoft HDInsight is another tool; it uses Windows Azure Blob storage as the default file system and offers low-cost availability.
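
Hadoop itself cannot be reproduced in a few lines, but the map/reduce pattern it parallelizes across a cluster can be sketched in plain Python: chunks of text are "mapped" to partial word counts and then "reduced" into totals. The input strings are hypothetical.

```python
from collections import Counter
from functools import reduce

# Input split into chunks, much as HDFS splits a large file into blocks.
chunks = ["big data needs storage", "storage needs big clusters"]

# Map phase: each chunk is processed independently (in Hadoop, on different nodes).
mapped = [Counter(chunk.split()) for chunk in chunks]

# Reduce phase: partial counts from every node are merged into one result.
totals = reduce(lambda a, b: a + b, mapped)
print(totals.most_common(3))
```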

The Hive tool plays a significant role in supporting data mining, while Sqoop connects Hadoop to relational databases so that data can be transferred between them (Corbellini, Mateos, Zunino, Godoy & Schiaffino, 2017). PolyBase works on SQL Server 2012 and later and handles relational data processing. Presto is built to handle data querying from social media sources such as Facebook. Excel assists in analyzing big data. Cassandra is a tool that manages large volumes of data effectively; the database offers high scalability and availability without compromise on cloud infrastructure and commodity hardware. Plotly and Bokeh are tools for data visualization.

Volumes of data

Organizations gather data from different sources, including social media, business transactions, and machine-to-machine or sensor data (Golov & Rönnbäck, 2017). In the past, storing big data was a challenge, but the development of new technologies such as Hadoop has eased the burden.

Types of data

Big data is classified as structured, unstructured, or semi-structured. Structured data refers to what is already stored in a database in an orderly manner. It accounts for approximately 20 percent of existing data and is widely used in computing and programming. Its sources are humans and machines. Data obtained from web logs, sensors, and financial systems falls under machine-generated data; this includes GPS data, medical devices, and statistics captured by applications and servers. Human-generated structured data comprises data that people enter into a computer, such as a name or other personal details.

Unstructured data does not fit the traditional row-and-column database, and there is no clear format for storing it. This data is also split into human-generated and machine-generated categories. Machine-generated unstructured data includes scientific data from experiments, satellite images, and radar data. Human-generated unstructured data is found on the internet as website content, mobile data, or social media data.

Semi-structured data refers to information that does not reside in a traditional database in the same format as structured data, but that has organizational properties that make it easier to process (Jukić, Sharma, Nestorov & Jukić, 2015). An example of such data is NoSQL documents, which contain keywords that assist in processing the documents effectively.
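
A semi-structured record might look like the JSON sketch below: there is no fixed table schema, but keys such as "tags" give the document enough organization to be processed. All field names and values are hypothetical.

```python
import json

# A semi-structured document: no rigid schema, but labeled fields.
raw = '{"id": "doc-42", "title": "Sensor reading", "tags": ["temperature", "lab-3"], "value": 21.7}'

doc = json.loads(raw)

# The keywords (keys and tags) make processing straightforward despite the lack of a schema.
if "temperature" in doc.get("tags", []):
    print(doc["title"], doc["value"])
```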

Data storage

Organizations across the world agree that data is at the heart of decision-making. An infrastructure therefore has to be established that transforms data into insights, gives meaning to dark and unstructured data, and enables quick action. Data storage should embrace the elements of accessibility, speed, and efficiency (Golov & Rönnbäck, 2017).

Storage should provide flexible, secure, and shared access to the data so that it can serve its intended functions without bottlenecks. Such a storage system has to maximize speed so that users can access the data without delay, and it has to be efficient so that the data remains reliably usable for its intended purpose in the organization.

There are different types of data storage, including distributed file systems, NewSQL databases, big data querying platforms, and NoSQL databases. Distributed file systems such as the Hadoop Distributed File System (HDFS) make it possible to store large amounts of unstructured data reliably on commodity hardware. HDFS is designed to store large files and is well suited for bulk processing and quick data ingestion.

NewSQL databases are modern relational databases with scalability comparable to NoSQL while maintaining transactional guarantees. NoSQL databases are essential for data storage and use modern data models (Jukić, Sharma, Nestorov & Jukić, 2015). Big data querying platforms are technologies that provide query facades over large data stores, including distributed file systems as well as NoSQL databases.

Sources of data

Big data often comes in varieties that include machine data, social data, and transactional data. Social media data offers organizations remarkable insights into consumer behavior; this includes gathering information from tweets and Facebook posts on how customers react to the different products a firm manufactures. YouTube downloads also offer information on how well the organization is functioning.

Machine data refers to information generated by industrial equipment, monitoring machinery, web logs that track users' online behavior, and real-time data gathered from sensors. Regarding transactional data, B2B companies as well as large retailers generate large volumes of data regularly, since transactions include items such as product IDs, payment information, prices, distributor data, and manufacturer details. Big firms such as Amazon.com generate this type of data. Big data can thus resemble unstructured, structured, or high-frequency information.

Unstructured data is obtained from information that is not interpreted or organized within traditional data models or databases (Kumar, Niu & Ré, 2013); this includes Twitter tweets, metadata, and social media posts. Multi-structured data refers to types and formats of data obtained from interactions between machines and people, such as the social networks of web applications. Data can also be obtained from electronic files and broadcasts.

Data transactions

Transactions conducted through electronic platforms are growing fast and becoming the largest in the world as the digital financial environment takes root. However, the complexity and volume of this data can be overwhelming (Abbasi, Sarker & Chiang, 2016). Most channel managers, operators, and marketers only realize the issue with the data when they try to harness and leverage it. Transaction data also serves as a predictive resource that informs the organization about its performance, its service delivery, and how to keep customers happy.

However, mining transaction data can be time-consuming and costly. In the past, the work of extracting, correlating, and collecting actionable transaction data has been challenging (Kromer, 2014). This has occurred because of the growing volumes and diversity of payment transactions and electronic banking, along with the omni-channel convergence of mobile, branch, POS, ATM, and internet banking channels. Globalization is also altering the normal functioning of financial organizations and increasing the disparities between business performance and the capabilities of banking technology platforms.

Security

The use of big data raises several considerations for the business, with the focus on securing the customers and the organization (Kumar, Niu & Ré, 2013). Security measures are developed to perform these functions and keep the organization safe so that it can continue conducting its business activities without major challenges. Breaches of big data create a significant problem for the organization and can lead to serious legal repercussions and reputational damage to the firm.

Techniques such as attribute-based encryption are vital to ensure that the organization can protect sensitive data and apply access controls (Abbasi, Sarker & Chiang, 2016). The deployment of big data for fraud detection and the use of security incident and event management (SIEM) systems are also attractive to many firms. Managing SIEM output and logging systems helps keep big data safe, and commercial replacements are available for securing log management systems. Technology can also be deployed to improve the security of big data.
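
Attribute-based encryption is beyond a short sketch, but the general idea of protecting sensitive fields before they reach big data storage can be hinted at with ordinary symmetric encryption. The sketch below uses the third-party cryptography package's Fernet recipe; the record contents and key handling are hypothetical, and real key management would sit behind access controls.

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# In practice the key would be managed by an access-controlled key service.
key = Fernet.generate_key()
cipher = Fernet(key)

# Sensitive customer data is encrypted before it is written to storage.
record = b'{"customer_id": 42, "card_number": "4111-1111-1111-1111"}'
token = cipher.encrypt(record)

# Only holders of the key (authorized users) can recover the plaintext.
print(cipher.decrypt(token))
```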

The increased adoption of web-based, cloud, and mobile applications has made sensitive data highly accessible from various platforms (Kromer, 2014). Free or low-cost platforms are particularly vulnerable to hacking. Organizations currently collect and process large amounts of information, so the storage and use of this data should be given adequate consideration to protect client information and preserve the firm's image. Poor IT security can destroy even a company's reputation.
