Constraints

Constraints on the IBM i platform are rules defined on database tables (whether created with DDL or DDS) that enforce data integrity by restricting the data that can be inserted, updated, or deleted.

This document outlines the types, benefits, and operation of named constraints in DB2 as implemented on IBM i.

Named constraints are those that are explicitly given a unique identifier within the database, allowing easier management and clearer error reporting.

 

Types of DB2 Named Constraints on IBM i

 

CONSTRAINT          DESCRIPTION

Primary Key         Uniquely identifies each record in a table; key columns cannot be NULL.

Unique              Ensures all values in the specified column(s) are unique.

Referential         Enforces a relationship between a parent table and a dependent table.

Check               Ensures column values satisfy a defined condition/expression.

 

Primary Key Constraints

 

  • Enforce the uniqueness of one or more columns to identify each row uniquely.
  • No columns defined in the Primary Key can be NULL.
  • A table can have only one primary key constraint.
  • Existing data is checked for uniqueness before the constraint is applied.
  • Eliminates the possibility of duplicate records (rows) in the table.
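As a sketch, a named primary key constraint can be added to an existing table with DB2 for i SQL; the library (MYLIB), table, and column names here are hypothetical:

```sql
-- Add a named primary key constraint to an existing table.
-- The key column must be defined NOT NULL, and the existing
-- data must already be unique, or the ALTER TABLE fails.
ALTER TABLE mylib.customer
  ADD CONSTRAINT customer_pk
  PRIMARY KEY (custno);
```

A later duplicate insert fails with SQLSTATE 23505, and the constraint name CUSTOMER_PK appears in the error message.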

 

Unique Constraints

 

  • Ensures that all values in the constrained columns are unique across the table.
  • Can be added to existing tables; each unique constraint must be given its own unique name.
  • Existing data is checked for uniqueness before the constraint is applied.
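A unique constraint is added the same way (again with hypothetical names); unlike a primary key, a table may carry several unique constraints:

```sql
-- Enforce uniqueness on a non-key column.
ALTER TABLE mylib.customer
  ADD CONSTRAINT customer_email_uq
  UNIQUE (email);
```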

 

Referential Constraints (Foreign Key Constraints)

 

  • Define relationships between tables by requiring that values in one or more foreign key columns (child) correspond to values in primary or unique key columns of another table.
  • Enforced during data modification operations like INSERT, UPDATE, or DELETE.
  • Existing data is checked for the relationship before the constraint is applied.
  • Both parent and dependent tables must reside in the same auxiliary storage pool (ASP).
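A referential constraint tying a hypothetical ORDERS table to its parent CUSTOMER table might look like this; RESTRICT is one of several available delete/update rules:

```sql
-- Every order must reference an existing customer.
-- RESTRICT blocks deleting or re-keying a parent row
-- that still has dependent rows.
ALTER TABLE mylib.orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (custno)
  REFERENCES mylib.customer (custno)
  ON DELETE RESTRICT
  ON UPDATE RESTRICT;
```

A violating insert or delete fails with SQLSTATE 23503, naming ORDERS_CUSTOMER_FK.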

Check Constraints

 

  • Specify a Boolean expression that must be true for data in a row.
  • Can be applied to one or multiple columns.
  • Violations prevent data modification and raise specific error statuses during I/O operations.
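A check constraint sketch, again with hypothetical names and a made-up status code list:

```sql
-- Reject any row whose values fail the Boolean expression.
ALTER TABLE mylib.orders
  ADD CONSTRAINT orders_qty_ck
  CHECK (qty > 0 AND status IN ('O', 'S', 'C'));
```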

 

Naming and Management of Constraints

 

  • Constraint names must be unique within the library (schema) and distinct from other constraint names.
  • Named constraints appear in error messages when violated, aiding in troubleshooting.
  • Constraint names are also used in both O/S and SQL management commands.
  • Adopting naming conventions is recommended to help identify the type and purpose of constraints more easily.
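Because each constraint has its own name, it can be managed individually. A sketch, with hypothetical names:

```sql
-- Drop (and later re-add) one constraint without touching
-- the others defined on the table.
ALTER TABLE mylib.orders
  DROP CONSTRAINT orders_qty_ck;
```

On the O/S side, CL commands such as WRKPFCST and RMVPFCST work with these same constraint names.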

 

Benefits of Using Constraints

 

  • Embeds the control of uniqueness and relationships into the database as a single point of control.
  • Some limited degree of data validation can be embedded with the use of CHECK constraints.
  • Eliminates the need for multiple application programs to perform the same validations multiple times. This simplifies the application code and logic.
  • Improves performance by reducing the number of I/O operations in the applications.
  • Constraint violations result in standard DB2 error messages, making problems easier to identify and debug.

 

Best Practices

 

  • Always define a PRIMARY KEY for every table.
  • Use “Referential Constraints” to document and enforce parent-child relationships to prevent orphaned records.
  • Adopt a standardized naming convention for constraints.

 

Conclusion

 

Constraints on IBM i are explicit, uniquely identified rules applied to tables to centralize and enforce data integrity. They include primary key, unique, referential, and check constraints. Naming constraints facilitates management, error handling, and modification. Constraints ensure that data adheres to the defined business rules and relationships, preventing invalid data entry and maintaining database consistency.

I want to acknowledge my colleague, Tommy Atkins, for his guidance and assistance with this article.


Field Resizing Conundrum on IBM i

Database modernization

With a legacy database you will need to change every program that uses the file at the same time you change the file definition. You simply have no choice. Level checking forces you to do so.

With a modern database using I/O servers, you can minimize the effort required. (Terms & Conditions apply)

Conditions:

  1. Add an unused field to the end of the existing file.
  2. Replace all the file I/O with an I/O server that does exactly what the READ, WRITE, SETLL, and CHAIN operations do. You still need to recompile every program that uses the file, but life gets better after introducing the I/O server. This can be accomplished with minimal programming changes.

The Secret Sauce: (parameters used to call the I/O server)         

  • FUNCTION: I (insert), U (update), D (delete)…
  • POINTER: pointer to the record format, provided by the caller of the I/O server
  • KEY: key field or fields (optional, depending on function)

Why locate the record format with a pointer?

  1. The parameter list will not have to change every time you add a field to the file.
  2. Programs using the I/O server will only have to be recompiled when an existing field or the record length changes.

Why add a 20-byte field of unused space to the record?

  To reserve space so that fields can be added to the file/table in the future without disturbing the running of existing programs. The only programs that need to be recompiled are the ones using the newly added fields. Programs not using the new field are not affected by the change and will continue to work without recompilation.
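A sketch of reserving that space with SQL (library, table, and column names are hypothetical):

```sql
-- Append a 20-byte filler column so future field additions
-- need not change the record length.
ALTER TABLE mylib.customer
  ADD COLUMN filler CHAR(20) NOT NULL DEFAULT '';
```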

This investment in an I/O server will allow you to handle DB changes with minimal disruption, because only one program accesses the file directly. It also simplifies the administration of the file.

 

Adding a Field

  1. Add the field/column to the file/table.
  2. Be careful not to change the row/record length.
  3. Compile the I/O server and the programs requiring the new field.
  4. You don't need to compile anything that is not using the new field.
  5. Copy data.
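Steps 1 and 2 can be sketched in SQL by carving the new field out of the reserved filler; the names are hypothetical, and the two ALTERs net out to an unchanged record length:

```sql
-- Add an 8-byte field and shrink the filler to compensate,
-- keeping the total record length the same.
ALTER TABLE mylib.customer
  ADD COLUMN loyalty_cd CHAR(8) NOT NULL DEFAULT '';
ALTER TABLE mylib.customer
  ALTER COLUMN filler SET DATA TYPE CHAR(12);
```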

Resizing a field:

  1. Make Changes to the file/table.
  2. Copy data from the old format to the new format.
  3. Recompile all programs using I/O server & the I/O server
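Step 1 might look like this in SQL (hypothetical names); on IBM i the ALTER itself converts the existing rows into the new definition, which covers step 2:

```sql
-- Widen a column in place; DB2 for i copies the existing
-- data into the new format as part of the ALTER.
ALTER TABLE mylib.customer
  ALTER COLUMN custname SET DATA TYPE CHAR(40);
```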

I/O servers go a long way to simplify the maintenance and administration of a file.

See our open-source page for examples of a modern DB. https://databoroughservices.com/rpg-open-source-project/

Agility Deficit Disorder (ADD) on IBM i 

Database modernization

Do you get the chills at the thought of making database (DB) changes?

ADD symptoms include poor data quality, monolithic programs, poor response times (agility), technical debt, integration issues, and being forced to make massive changes instead of incremental improvements.

Data Quality: It's not uncommon to find junk records in the database. You may or may not know how they got there. Improve data quality by adding constraints. Constraints are enforced by the operating system and manage relationships between files. Bonus: you don't have to write or maintain any code to do it. Just monitor for the error.

Monolithic Programs: The monster programs critical to your business that contain the business logic. Because of their complexity, these programs usually require a highly skilled resource to work on them. Extensive testing is also needed to avoid any unforeseen problems. Moving the business logic to the database via event trigger programs removes a lot of complexity from a scary program. Event trigger programs facilitate a large part of data validation and business logic. When that logic is moved into the database engine, it standardizes where it can be found, making maintaining and testing business logic changes much safer. Changing a trigger program is much simpler, faster, and safer than changing a monolithic program.
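As a sketch of such an event trigger, assuming a recent IBM i release and hypothetical table, column, and trigger names:

```sql
-- Vet every insert at the point of entry; nothing, not even
-- DFU or ad hoc SQL, can get past the trigger.
CREATE OR REPLACE TRIGGER mylib.orders_validate
BEFORE INSERT ON mylib.orders
REFERENCING NEW AS n
FOR EACH ROW
BEGIN
  IF n.qty <= 0 THEN
    SIGNAL SQLSTATE '75001'
      SET MESSAGE_TEXT = 'Order quantity must be positive';
  END IF;
END;
```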

Response times: When your users demand information currently unavailable in the database, this becomes a major issue. Adding a field to a file causes a level check and the recompile of every program referencing the file. You can dramatically minimize this pain and improve agility by adopting data-centricity. The complexity and risk can be significantly reduced through a combination of DB design and normalization, I/O servers, and event trigger programs.

Technical debt: the implied cost of additional work that arises when software developers choose a quick and easy solution over a more comprehensive one that would take longer to implement. The question becomes: how long can you continue with a program-centric strategy?

Integration: Without moving the business logic to the database, integration with front-end GUI programs is complicated because the business logic has to be duplicated in the API. This complicates things in two ways: the back-end programmer needs to communicate the business rules to the API development team, and the team then has to maintain them. Using event trigger programs to validate the data minimizes API complexity and avoids maintaining the logic in multiple places.

Massive Changes: When time and pressure finally demand a DB change, it is scary, especially if it is a commonly used file such as the customer master. These changes could involve so many programs that you must freeze other developments until completion. Agility and cognitive computing (AI) demand that you transition to a data-centric application.

ADD is caused by not keeping up with technological advancements. The cure requires some education, planning, and some pain, but the benefits are substantial.

To see what a modern data-centric application looks like, visit our open-source page.

https://databoroughservices.com/rpg-open-source-project/ 

To better understand the solution, you are welcome to book a no-strings-attached discussion.

https://calendly.com/neil-woodhams-dbs/data-centric-application-example

At Databorough Services, we understand the critical role that a well-maintained database plays in ensuring an organization’s smooth operation and efficiency. As the technology landscape continues to evolve rapidly, it becomes essential for businesses to adapt and optimize their databases to meet the demands of modern-day computing.

APIs vs. Database Renovation

Database modernization

Functionality:

    • API: An API provides a set of protocols, tools, and definitions for building and integrating software applications. It defines how software components should interact, enabling different systems to communicate. (The bridge between the frontend and the application code)
    • Agile Data Centric Database: An agile database refers to a design approach that emphasizes flexibility and adaptability. It allows for iterative development, making it easier to achieve incremental improvements over time. By implementing event processing, constraints, and the necessary steps to achieve a 3rd normal form, you add agility to the database. 

In a modern application, they both perform essential and separate functions. This post will focus on the functionality that they could share. They both can be used to enforce business rules. An API in a program-centric application likely encapsulates business logic from pre-GUI (Graphical User Interface). In a data-centric world, a significant amount of the business logic can be built into the database and not the API.

Flexibility:

    • API: APIs provide flexibility in integrating different systems and services. They allow developers to access specific functionality or data from one system in another.
    • Agile Data Centric Database: Agile databases provide flexibility by adapting to changing business needs and requirements. Depending on the level of agility achieved, they can accommodate changes in data structures, relationships, and business rules without requiring significant redesign or disruption.

The API provides the connection between the database and the GUI. The database contains and maintains the company’s most valuable asset. (DATA)

Development Process:

    • API: Developing an API involves defining endpoints, request-response formats, authentication mechanisms, and documentation. It typically follows a structured process and requires collaboration between teams responsible for building different system components.
    • Agile Data Centric Database: Agile database development follows agile principles, emphasizing iterative development, continuous feedback, and collaboration between developers, business stakeholders, and database administrators. Changes are made incrementally based on evolving requirements and user feedback.

It is likely faster to develop an API than it is to renovate your database. However, speed is not the only consideration. Ongoing application maintenance should be considered, along with the mounting technical debt. When you create an API with business logic included, you must maintain those rules in two or more places. When you associate the business rules with a file through an event trigger program, you consolidate the business logic in one place and reduce the maintenance effort in the future.

In the long term, the database of an IBM i application will need to evolve to meet these future demands, focusing on scalability, performance, security, integration, flexibility, analytics, cloud compatibility, and high availability.

In summary, APIs and agile databases potentially have overlapping purposes. They both play important roles in building modern software systems. APIs should facilitate communication and integration between software components. Agile databases support flexible and adaptive data management, including business logic, while accommodating changing business needs. Together, they enable the development of dynamic and responsive applications that can evolve and scale over time.

For the sake of agility, it is better to make a strategic decision to enforce business rules at the database level rather than in the API, to avoid the programming redundancy and expense of doing it more than once.

Making database changes, such as adding a field or changing a field size, can be intimidating, especially during normalization. I will include an example explicitly created for one of our customers to test and implement file normalization changes safely.

https://databoroughservices.com/2023/04/27/low-risk-file-normalization-modernization/ 

You asked for Modernization

Overview of components that make up DB2

You got a new user interface. Be careful how you throw around the term modernization. You thought a new user interface was all you needed. You forgot about the DB (database). The demands of a modern application may exceed the capabilities of your DB.

To make what I'm about to suggest relevant, let me share some coding practices from the past.

There was a time when we described files inside programs. It was possible to describe a field as numeric in one program and character in another. Then we started using externally described files, which improved the quality of the data a lot.

Since the introduction of externally described files, there have been many improvements. They focus on data quality and the programming effort required to achieve optimum results.

Before-event triggers: offer an ironclad method of vetting every record/row added to or updated in a file/table. (No more wondering how those orphaned records got there.) Business logic is associated with the add/change/delete function. Data quality is checked at the point of entry. Not even DFU or SQL can get around this.

After-event triggers: can keep data up to date in real time. Traditional end-of-day, week, month, and year processing can be performed as a result of successfully adding, updating, or deleting a record/row.
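A sketch of that real-time idea, with hypothetical library, table, and column names: an after trigger folds each new order into a running total instead of leaving the work for an end-of-day batch job:

```sql
-- Maintain a daily total as each order arrives.
CREATE OR REPLACE TRIGGER mylib.orders_totals
AFTER INSERT ON mylib.orders
REFERENCING NEW AS n
FOR EACH ROW
BEGIN
  UPDATE mylib.daily_totals
     SET amount = amount + n.amount
   WHERE sales_date = CURRENT DATE;
END;
```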

Constraints: save a lot of programming because they enforce the data relationships and prevent orphaned records. You let the DB management system check the constraints you define. You could continue to perform this check in the programs or simply let the operating system do it.

I/O (input/output) servers: are border control points for security and rollback considerations. They simplify the administration of security and data usage.

AO Foundation (my solution) can assist you by generating programs that support these modern concepts.

By doing the UI first, you are increasing the technical debt, because you will be forced to convert the business logic and the constraints to the language of the new UI and/or use an API. An API is a repackaging of the existing business logic, which means both versions of the business rules will need to be maintained when requirements change.

Side effect of renovating the database first: it becomes easier to replace the user interface.

Consolidation of the business rules in trigger programs improves agility, thus making it a more strategic solution while minimizing API requirements. Not to mention the cost associated with maintaining several versions of the business rules.

If you’re ready to modernize your application, take advantage of this special offer. Don’t let outdated coding practices hold you back. Upgrade your application today.

https://databoroughservices.com/special-offer/

IT Managers: Why are you spending resources on something the operating system does?

Database modernization

Specifically the database: if you're not taking advantage of current technology (trigger programs, constraints, and I/O servers), you're building up technical debt, and it's going to cost you big time in the future.

The sooner you accept that a modern database will serve you better than using program-centric techniques to do things that are handled by the operating system, the sooner you will realize the benefits.

The problem with enforcing data integrity in monolithic programs is that it requires programming effort for something that can, and should, be done by the operating system.

One problem with keeping the business logic in a monolithic program is that a file may be added to and maintained by multiple programs, which creates a need to keep the business logic in sync across them all. By consolidating the business logic in one place (trigger programs/service programs), you negate the need to do so in the application programs. It is simply the best place to do the job.

Time for a new User Interface: It doesn't matter what language you're going to use for the new UI. If you're not using trigger programs and constraints, you will be forced to convert the referential integrity logic and the business logic to the new language. This means new programming skills will be needed while existing ones go unused.

You also need to consider what future demands are going to be put on your data, in the form of quality and security. I’m saying it’s better to deal with the database before the user interface. You may find yourself in a situation where you are forced to redo the user interface to meet your data requirements.

To see an example of a modern database using advanced RPGLE programming techniques, you can download a SAVF with a small five-file application from this open-source page.

Kicking the can down the road

IBM i Database modernization

The legacy application dilemma.

The most common way to kick the can down the road is by modernizing the user interface without modernizing the database.

Another way to kick the can down the road is to modernize your database using surrogate logicals.

Reasons why the UI is chosen to be modernized before the database:

The boss says, "I want a modernized application. Most of all, I want a new user interface."

It's easier to just address specific issues as opposed to taking a larger, more strategic approach.

Failure to recognize the implications of doing the UI first or the benefits of the MVC architecture.

 Fear of risk

Strategic Argument

Doing the user interface first is counterproductive because you will be forced into converting the business logic from the legacy user interface to the modern browser language. That means new programming skills will be required while existing ones go unused. Secondly, it doesn't get you any closer to a modern MVC (Model-View-Controller) architecture.

In contrast, by moving the business logic from the legacy UI to trigger programs and taking advantage of DB2 constraints you will be making good use of existing RPG skills, as well as making the task of modernizing the UI simpler.

Surrogate logicals do not give you the full functionality of SQL.

Risk can be managed by running parallel databases. Simply add an after trigger to the original file that replicates each change to the new trigger-program version of the same file in a test library. Most importantly, this allows for testing until you are satisfied.
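One way to sketch that parallel-run trigger, with hypothetical library, table, and column names:

```sql
-- Mirror each production insert into the modernized copy in
-- the test library, so the new version can be exercised in
-- parallel until testing is complete.
CREATE OR REPLACE TRIGGER prodlib.customer_mirror
AFTER INSERT ON prodlib.customer
REFERENCING NEW AS n
FOR EACH ROW
BEGIN
  INSERT INTO testlib.customer (custno, custname)
    VALUES (n.custno, n.custname);
END;
```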

Download a sample of a modernized 5 file application.

Check out a special offer to have one of your files modernized.

Sample Modern Database

This example contains several test programs used to exercise the modernized database. The test programs can be found in the source file SRCIOS; they can be modified to add or update records. Remember that referential constraints are in place and validations are performed in the trigger programs. You need to be ILE literate to understand and work on these programs. Don't be afraid to contact us for assistance.

Download Sample

DB2

DB2 is a relational database management system (RDBMS) developed by IBM. It was first introduced in the 1980s and has since gone through many iterations and updates.

DB2 is designed to manage large amounts of structured data and provides a wide range of tools and features to ensure data integrity, reliability, and security. It supports SQL (Structured Query Language), which is a standard language for accessing and manipulating data in a relational database.

Some of the key features of DB2 include:

  • Scalability: DB2 is designed to handle large amounts of data and can scale up or down depending on your needs.
  • Security: DB2 provides a range of security features, including encryption, access controls, and auditing, to ensure that your data is protected.
  • Availability: DB2 is designed to be highly available, with features like automatic failover and backup and recovery options.
  • Compatibility: DB2 is compatible with a wide range of operating systems and platforms, including Windows, Linux, and UNIX.
  • Performance: DB2 is optimized for performance, with features like data compression and indexing to ensure that queries and transactions run quickly and efficiently.

Overall, DB2 is a powerful and versatile RDBMS that is widely used in enterprise environments for managing large amounts of structured data.