
Postgres Check constraint notes

 

Define PostgreSQL CHECK constraints for new tables

Typically, you define CHECK constraints when you create a table with the CREATE TABLE statement.

The following statement defines an employees table.

DROP TABLE IF EXISTS employees;
CREATE TABLE employees (
	id SERIAL PRIMARY KEY,
	first_name VARCHAR (50),
	last_name VARCHAR (50),
	birth_date DATE CHECK (birth_date > '1900-01-01'),
	joined_date DATE CHECK (joined_date > birth_date),
	salary numeric CHECK(salary > 0)
);

The employees table has three CHECK constraints:

  • First, the birth date (birth_date) of the employee must be later than 01/01/1900. If you try to insert a birth date on or before 01/01/1900, you will receive an error message.
  • Second, the joined date (joined_date) must be later than the birth date (birth_date). This check prevents inserting or updating dates that are semantically invalid.
  • Third, the salary must be greater than zero, which is obvious.

Let’s try to insert a new row into the employees table:

INSERT INTO employees (first_name, last_name, birth_date, joined_date, salary)
VALUES ('John', 'Doe', '1972-01-01', '2015-07-01', -100000);

The statement attempted to insert a negative salary into the salary column. PostgreSQL rejected it with the following error message:

[Err] ERROR:  new row for relation "employees" violates check constraint "employees_salary_check"
DETAIL:  Failing row contains (1, John, Doe, 1972-01-01, 2015-07-01, -100000).

The insert failed because of the CHECK constraint on the salary column that accepts only positive values.
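The other CHECK constraints are enforced the same way. The following statements are a small illustration (the names and values are made up): the first row satisfies every constraint and is inserted, while the second violates the check on birth_date and is rejected.

-- Satisfies all three CHECK constraints, so the row is inserted.
INSERT INTO employees (first_name, last_name, birth_date, joined_date, salary)
VALUES ('Jane', 'Roe', '1980-05-10', '2010-03-01', 60000);

-- Fails: birth_date is not later than 1900-01-01, so PostgreSQL rejects the row
-- with a check constraint violation.
INSERT INTO employees (first_name, last_name, birth_date, joined_date, salary)
VALUES ('Old', 'Timer', '1899-12-31', '1920-01-01', 40000);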

By default, PostgreSQL gives the CHECK constraint a name using the following pattern:

{table}_{column}_check

For example, the constraint on the salary column has the following constraint name:

employees_salary_check
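If you want to verify which CHECK constraints exist on a table, one option is to query the pg_constraint system catalog (a minimal sketch; \d employees in psql shows the same information):

SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'employees'::regclass
  AND contype = 'c';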

However, if you want to assign a CHECK constraint a specific name, you can specify it after the CONSTRAINT keyword as follows:

column_name data_type CONSTRAINT constraint_name CHECK(...)

See the following example:

...
salary numeric CONSTRAINT positive_salary CHECK(salary > 0)
...
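For example, you could define the employees table with a named constraint on the salary column (a sketch based on the table above). A violation would then be reported under the name positive_salary instead of the generated employees_salary_check:

DROP TABLE IF EXISTS employees;
CREATE TABLE employees (
	id SERIAL PRIMARY KEY,
	first_name VARCHAR (50),
	last_name VARCHAR (50),
	birth_date DATE CHECK (birth_date > '1900-01-01'),
	joined_date DATE CHECK (joined_date > birth_date),
	salary numeric CONSTRAINT positive_salary CHECK (salary > 0)
);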

Define PostgreSQL CHECK constraints for existing tables

To add CHECK constraints to existing tables, you use the ALTER TABLE statement. Suppose you have an existing table in the database named prices_list:

CREATE TABLE prices_list (
	id serial PRIMARY KEY,
	product_id INT NOT NULL,
	price NUMERIC NOT NULL,
	discount NUMERIC NOT NULL,
	valid_from DATE NOT NULL,
	valid_to DATE NOT NULL
);

Now, you can use the ALTER TABLE statement to add CHECK constraints to the prices_list table. The price must be greater than zero, the discount must not be negative, and the discount must be less than the price. Notice that we use a single Boolean expression that combines these conditions with AND operators.

ALTER TABLE prices_list 
ADD CONSTRAINT price_discount_check 
CHECK (
	price > 0
	AND discount >= 0
	AND price > discount
);
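As a quick illustration (the product_id and values are hypothetical), a row whose discount exceeds its price now violates price_discount_check and is rejected:

-- Fails: discount (150) is greater than price (100), so price > discount is false.
INSERT INTO prices_list (product_id, price, discount, valid_from, valid_to)
VALUES (1, 100, 150, '2024-01-01', '2024-12-31');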

The valid-to date (valid_to) must be greater than or equal to the valid-from date (valid_from).

ALTER TABLE prices_list 
ADD CONSTRAINT valid_range_check 
CHECK (valid_to >= valid_from);
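Similarly, a row whose valid_to date falls before its valid_from date violates valid_range_check (again, illustrative values):

-- Fails: valid_to is earlier than valid_from.
INSERT INTO prices_list (product_id, price, discount, valid_from, valid_to)
VALUES (1, 100, 10, '2024-12-31', '2024-01-01');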

CHECK constraints are very useful for placing additional logic at the database layer to restrict the values that columns can accept. By using CHECK constraints, you can make sure that data is written to the database correctly.

In this tutorial, you have learned how to use the PostgreSQL CHECK constraint to validate column values with a Boolean expression.
