Critical Discussion Responses

Part 1: 100-150 words with references

Topic Question:

Database normalization is a very important process in designing and organizing tables (relations) and their columns (attributes or fields) in a relational database. Therefore, what are the consequences (problems) if a database was designed without it? Would the database still work?

Discussion Post:

Database normalization is essential for any database administrator who wants to deliver data quickly and accurately to users and clients. "Normalization is part of successful database design; without normalization, database systems can be inaccurate, slow, and inefficient, and they might not produce the data you expect" (Poolet, 2017). Normalization arranges data into the most logical groups, reduces the amount of duplicated data, ensures that data being changed only has to be changed in one location, and produces a responsive database. Without normalization, a database would still work, but it would be slow and prone to inaccuracy. There are different levels (normal forms) of normalization, and the database administrator must determine the most effective level for the system. Also, in systems dominated by database reads rather than writes, normalization can require additional table joins per query, which slows response time.
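The "change data in only one location" benefit can be sketched with a small in-memory SQLite example. The tables, columns, and values below are invented for illustration; the point is that factoring the advisor's office into its own table means one UPDATE fixes it for every student, instead of editing every duplicated row.

```python
import sqlite3

# Hypothetical normalized design: advisors factored into their own table,
# so their office is stored exactly once (all names/values are invented).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE advisors (id INTEGER PRIMARY KEY, name TEXT, office TEXT)")
cur.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, "
            "advisor_id INTEGER REFERENCES advisors(id))")
cur.execute("INSERT INTO advisors VALUES (1, 'Dr. Smith', 'Room 101')")
cur.executemany("INSERT INTO students VALUES (?, ?, ?)",
                [(1, 'Ana', 1), (2, 'Ben', 1)])

# One UPDATE in one location corrects the office for every student.
cur.execute("UPDATE advisors SET office = 'Room 202' WHERE id = 1")

# The cost of normalization: reads now need a join.
rows = cur.execute("SELECT s.name, a.office FROM students s "
                   "JOIN advisors a ON s.advisor_id = a.id "
                   "ORDER BY s.id").fetchall()
print(rows)
```

A denormalized design would have repeated 'Room 101' on each student row, so the same correction would require touching every copy and risk missing one.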


Poolet, M. (2017, January 09). SQL by Design: Why You Need Database Normalization. Retrieved November 02, 2017, from

Critical Reply with reference:

Part 2: 100-150 words with references

Topic Question:

Pretend that you are building a Web-based system for the admissions office at your university. The system will be used to accept electronic applications from students. All the data for the system will be stored in a variety of files.

Question: Give an example using the preceding system for each of the following file types: master, look-up, transaction, audit, and history. What kind of information would each file contain and how would the file be used?

[Sources: "CHAPTER 11: DATA STORAGE DESIGN" – Alan Dennis, Barbara Haley Wixom, and Roberta M. Roth (2012). System Analysis and Design, Fifth Edition, John Wiley & Sons.]

Discussion Post:

· Master file: A collection of records pertaining to one of the main subjects of an information system (PCMag, 2017). This type of file contains descriptive data about the element involved. In the admissions-office scenario, the master file contains each student's core information: first and last name, social security or student number (either of which could serve as the primary key, though the student number is the safer choice since names are not unique), address, date of birth, degree program, and whether the student is online or on campus. This file holds the primary information needed to process everything else, such as payments, enrollments, and transcripts.

· Lookup file: This type of data file holds static data for use in the system. It is a table that holds static values and is used to look them up; most of the time, lookup tables are used for display purposes rather than computations (Celko, 2011). For example, a system user can look up a specific student by using the student number as the value used to retrieve the matching record.

· Transaction file: The data in transaction files is used to update the master files, which contain the data about the subjects of the organization. In the university scenario, a transaction file records events such as a student changing from one degree program to another, a change of address, or a change of payment method, among others. Sometimes this type of file is also used for audits and business analysis.

· Audit file: Contains recorded data about transactions within the database tables, which is later used for security audits and for analyzing changes to the data. The university can use this type of file to audit payments made by students for courses taken and other fees.

· History file: This file holds data stored or archived for later use, such as audits, analysis, statistics, projections, or backups. Recovery log files and the recovery history file are created automatically when a database is created (Visser & Wong, 2004).
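The way these five file types interact can be sketched in a few lines of Python. Every record layout, student ID, and value below is invented for illustration: a transaction record updates the master file, the change is logged to the audit file, a snapshot is archived to the history file, and the lookup file translates a static degree code for display.

```python
from datetime import date

# Hypothetical admissions-system files (all layouts/values are invented).
master = {"S1001": {"name": "Ana Ruiz", "address": "12 Elm St", "major": "CS"}}
lookup = {"CS": "Computer Science", "EE": "Electrical Engineering"}  # static codes
transactions = [{"student": "S1001", "field": "address", "new": "9 Oak Ave"}]
audit, history = [], []

for txn in transactions:                      # transaction file updates the master file
    rec = master[txn["student"]]
    old = rec[txn["field"]]
    rec[txn["field"]] = txn["new"]
    audit.append({"student": txn["student"], "field": txn["field"],  # audit trail
                  "old": old, "new": txn["new"],
                  "on": date.today().isoformat()})

history.append(dict(master["S1001"]))          # archived snapshot for later analysis

print(master["S1001"]["address"])              # updated master record
print(lookup[master["S1001"]["major"]])        # lookup file used for display
```

The audit entry keeps both the old and new values, which is what makes a later security review of the change possible.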

References

Visser, S., & Wong, B. (2004). DB2 universal database (2nd ed.). Sams.

Celko, J. (2011). Look-up tables in SQL. RedGate Hub. Retrieved from

PC Mag. (2017). Retrieved from

Critical Reply with reference:

Part 3: 100-150 words with references

Topic Question:

A major public university graduates approximately 10,000 students per year, and its development office has decided to build a Web-based system that solicits and tracks donations from the university's large alumni body. Ultimately, the development officers hope to use the information in the system to better understand the alumni giving patterns so that they can improve giving rates. Question: 1. What kind of system is this? 2. What different kinds of data will this system use? 3. On the basis of your answers, what kind of data storage format(s) do you recommend for this system? [Sources: "CHAPTER 11: DATA STORAGE DESIGN" – Alan Dennis, Barbara Haley Wixom, and Roberta M. Roth (2012). System Analysis and Design, Fifth Edition, John Wiley & Sons.]

Discussion Post:

· While it is true that a large relational database might work, I believe the sheer volume of data, as well as the type of information the office wishes to extract, would benefit from a structure that can accommodate data mining – i.e., a data warehouse.

Data warehouses have many features that would directly assist the university in learning more about its alumni. According to "Data Warehouses" (2017):

· Data in a data warehouse cover a much longer time frame than data in traditional transaction-oriented databases because queries usually concern longer-term decision making rather than daily transaction details.

· Data warehouses are usually optimized for answering complex queries, known as OLAP, from managers and analysts, rather than simple, repeatedly asked queries.

· Data warehouses allow easy access via data mining software (called siftware) that searches for patterns and is able to identify relationships not imagined by human decision makers.

· Data warehouses include not just one but multiple databases that have been processed so that the warehouse’s data are defined uniformly.

The data mining capabilities that a data warehouse offers would be useful to the university for a variety of reasons. For example, data mining can identify patterns that a human is unable to detect; in particular, the types of patterns decision makers try to identify include associations, sequences, clustering, and trends ("Data Warehouses," 2017). Furthermore, data warehouses often incorporate data from a wide variety of outside sources, which would let the office enrich its picture of alumni contribution habits. Ultimately, this could give the development officers just what they want – the ability to better understand alumni giving patterns so that they can improve giving rates.
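An OLAP-style query over such a warehouse can be sketched as a simple rollup along one dimension. The donation records, alumni IDs, and the "graduation decade" dimension below are invented for illustration; a real warehouse would aggregate across far more dimensions (school, region, campaign) and far more rows.

```python
from collections import defaultdict

# Hypothetical fact table of alumni donations (all records are invented).
donations = [
    {"alum": "A1", "grad_year": 1998, "amount": 50.0},
    {"alum": "A2", "grad_year": 2003, "amount": 200.0},
    {"alum": "A3", "grad_year": 2005, "amount": 75.0},
    {"alum": "A1", "grad_year": 1998, "amount": 25.0},
]

# Roll up total giving along a "graduation decade" dimension.
totals = defaultdict(float)
for d in donations:
    decade = (d["grad_year"] // 10) * 10
    totals[decade] += d["amount"]

print(dict(totals))
```

A development officer could compare such rollups over time to spot the trend and clustering patterns the post describes, e.g. which graduating cohorts give the most.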


Data Warehouses. (2017, August 22). Retrieved October 31, 2017, from

Data warehouse. (2017, October 24). Retrieved October 31, 2017, from

Critical Reply with reference:

Part 4: 100-150 words with references

Topic Question:

Identify appropriate applications for key-value databases.

Discussion Post:

· A key-value database is a very basic type of database composed of only two things, keys and values; hence the name key-value database. These databases are unsophisticated, but their advantages are great. Key-value databases are much faster than relational databases for simple reads and writes by key. A second advantage is that they can grow in size without the database having to be completely redesigned. Third, key-value databases are more cost-efficient to scale than relational databases.

When are key-value databases appropriate to use? First, they are commonly used to manage session information in web applications. Second, key-value (NoSQL) databases can scale to very high record volumes under rapid change – millions of real-time users – through distributed processing and storage. Third, key-value databases have built-in redundancy, so the application does not fail entirely when some storage nodes are lost.
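The session-management use case can be sketched with a plain Python dict standing in for the key-value store (a real deployment would use a store such as Redis or Amazon DynamoDB; the function names and TTL value here are invented for illustration). The session ID is the key, and every read is a single key lookup with no joins.

```python
import time
import uuid

# A plain dict simulating a key-value store (invented stand-in).
store = {}

def create_session(user, ttl=1800):
    """Create a session keyed by a random ID, expiring after `ttl` seconds."""
    key = uuid.uuid4().hex                       # the session ID is the key
    store[key] = {"user": user, "expires": time.time() + ttl}
    return key

def get_session(key):
    """Single-key lookup; expired or missing sessions return None."""
    session = store.get(key)
    if session is None or session["expires"] < time.time():
        store.pop(key, None)                     # lazily evict expired sessions
        return None
    return session

sid = create_session("alice")
print(get_session(sid)["user"])
```

Because each session lives under its own key, sessions spread naturally across storage nodes, which is what gives this pattern its scalability.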


Reeve, A. (2013, November 25). Big Data Architectures – NoSQL Use Cases for Key Value Databases | EMC. Retrieved October 31, 2017, from architectures-nosql-use-cases-for-key-value-databases

Critical Reply with reference:

Part 5: 100-150 words with references

Topic Question:

Identify appropriate applications for key-value databases.

Discussion Post:

· Key-value databases are part of the NoSQL database group. They are distinct from other NoSQL databases in that they have two major components: a key and data. The key is used to point to the rest of the data. The biggest advantage of key-value databases compared to other forms of databases is that they are very fast.

Handling sessions in a web application is where key-value databases become very useful. When there are many users of a given application, a key-value database can be used to manage each session. In addition, a key-value database can also be useful in systems that have significant fluctuation in usage over a given time period. There are many situations where an application sees very high usage during certain periods and very low usage at other times. A key-value database suits such applications because it can scale up and down more easily than other solutions.

Some examples of key-value databases include Amazon Dynamo and Berkeley DB.
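The easy scale-up described above comes from the fact that keys can be spread across nodes by hashing, so capacity grows by adding nodes rather than redesigning the schema. The node names and key strings below are invented; this is a deliberately simplified sketch (production systems typically use consistent hashing so that adding a node moves only a fraction of the keys).

```python
import hashlib

# Hypothetical cluster of storage nodes (names are invented).
nodes = ["node-a", "node-b", "node-c"]

def node_for(key):
    """Route a key to a node by hashing it - simplified modulo placement."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]   # each key maps to exactly one node

placement = {k: node_for(k) for k in ["session:1", "session:2", "session:3"]}
print(placement)
```

Since routing depends only on the key, any node can answer "where does this key live?" without a central coordinator, which is what lets the cluster absorb usage spikes.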


Reeve, A. (2013, November 25). Big Data Architectures – NoSQL Use Cases for Key Value Databases.  Retrieved from

Yegulalp, S. (2017, October 27).  NoSQL standouts: The best key-value databases compared. Retrieved from

Critical Reply with reference:

Part 6: 100-150 words with references

Topic Question:

Discuss the role of consistency in key-value modeling

Discussion Post:

· A key-value database is a database designed to store, retrieve, and manage associative arrays. Records are stored and retrieved using a key that uniquely identifies each record and quickly locates its data in the file. A key-value store works differently than an RDBMS. An RDBMS predefines the data structure in the database as a series of tables containing fields with well-defined data types. Key-value modeling, on the other hand, uses a set of two linked data elements: a key, which is a unique identifier for a data element, and the value, which is either the identified data or a pointer to the data element's location.

In distributed computing, a key-value store can offer consistency guarantees ranging from eventual consistency to serializability:

Eventual consistency: Occurs when, if no new updates are made to a given data item, all accesses to that item eventually return the last updated value.

Sequential consistency: Occurs when we allow processes to skew in time, so that their operations can take effect before the invocation or after completion, as long as each process's operations still take effect in its program order.

Causal consistency: Occurs when we do not impose an order on every operation of a process; only the causally related operations must occur in order.

Serializable consistency: Occurs when the history of operations is equivalent to one that occurred in a single atomic order, but says nothing about the invocation and completion times.

Consistency comes with costs: stronger consistency models tend to require more coordination – more messages back and forth to ensure that operations run in the right order. So, in practice, it is better to use a hybrid approach: "weaker" consistency models as far as possible, for availability and performance, and "stronger" consistency models where needed, because the algorithm requires stricter ordering of operations.
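Eventual consistency can be illustrated with a toy two-replica simulation. The explicit `sync()` call below is an invented stand-in for asynchronous replication: a write is acknowledged after reaching one replica, a read from the other replica may be stale in the meantime, and once updates stop propagating the replicas converge on the last value.

```python
# Toy sketch of eventual consistency between two replicas of a
# key-value record (the sync() step is an invented stand-in for
# asynchronous replication).
replica_a, replica_b = {}, {}
pending = []                           # updates not yet propagated to replica_b

def write(key, value):
    replica_a[key] = value             # acknowledged after reaching one replica
    pending.append((key, value))

def sync():
    while pending:                     # once updates stop, replicas converge
        key, value = pending.pop(0)
        replica_b[key] = value

write("balance", 100)
stale = replica_b.get("balance")       # a read here may see no/old data
sync()
print(stale, replica_b["balance"])
```

A stronger model would delay the write acknowledgment until both replicas had the value, which is exactly the extra coordination cost the paragraph above describes.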


Strong consistency models. Retrieved from


Critical Reply with reference:
