Resolution is a theorem-proving technique used to establish the truth of a statement by building refutation proofs, or proofs by contradiction. This method was invented by mathematician John Alan Robinson in 1965.
When several statements are given and a conclusion must be proven from them, resolution can be used. A key aspect of this process is unification, which is crucial for combining statements to reach a conclusion. Resolution operates on statements in conjunctive normal form (CNF), also called clausal form.
Clause: A clause is a disjunction of literals (atomic sentences or their negations). A clause containing only one literal is known as a unit clause.
Conjunctive Normal Form (CNF): A sentence is in CNF if it is a conjunction of clauses.
In First-Order Logic (FOL), resolution involves the following steps, illustrated here with a simple example:

Given statements:
P → Q
Q → R

And we want to prove P → R.

Convert to CNF:
¬P ∨ Q
¬Q ∨ R

Negate the conclusion and convert to CNF:
¬(P → R) becomes P ∧ ¬R, giving the unit clauses P and ¬R.

Combine and resolve:
Resolving P with ¬P ∨ Q yields Q; resolving Q with ¬Q ∨ R yields R; resolving R with ¬R yields the empty clause.

Since we derived the empty clause, the original conclusion P → R is proven by contradiction.
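The steps above can be sketched as a small propositional resolution procedure. This is an illustrative sketch, not a production prover; the clause encoding (frozensets of string literals, with "~" marking negation) is our own convention:

```python
# A minimal sketch of resolution by refutation, proving P -> R
# from P -> Q and Q -> R. Clauses are frozensets of literal strings.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return every resolvent of two clauses."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

def refutes(clauses):
    """Saturate the clause set; True if the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(frozenset(r))
        if new <= clauses:             # no new clauses: not provable
            return False
        clauses |= new

# KB in CNF: P -> Q, Q -> R, plus the negated goal ~(P -> R) = {P, ~R}
kb = [frozenset({"~P", "Q"}), frozenset({"~Q", "R"}),
      frozenset({"P"}), frozenset({"~R"})]
print(refutes(kb))  # True: P -> R follows
```

The saturation loop terminates because only finitely many clauses can be built from a finite set of propositional literals.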
Resolution in FOL is a powerful method for automated theorem proving, particularly in fields like artificial intelligence and formal verification.
The resolution inference rule for first-order logic is an extension of the propositional resolution rule. It operates by resolving two clauses that contain complementary literals. The clauses are first standardized apart, renaming variables so that they share none.
The rule can be visualized as:
l1 ∨ ... ∨ lk,   m1 ∨ ... ∨ mn
----------------------------------------
SUBST(θ, l1 ∨ ... ∨ li-1 ∨ li+1 ∨ ... ∨ lk ∨ m1 ∨ ... ∨ mj-1 ∨ mj+1 ∨ ... ∨ mn)

Where li and mj are complementary literals, and θ is the unifier that makes them complementary (that is, UNIFY(li, ¬mj) = θ).
This rule is often referred to as the binary resolution rule because it resolves exactly two literals.
Consider the following clauses:
Animal(g(x)) ∨ Loves(f(x), x)
¬Loves(a, b) ∨ ¬Kills(a, b)

Here, the complementary literals are Loves(f(x), x) and ¬Loves(a, b).

Using the unifier θ = [a/f(x), b/x], we can unify these literals and generate the resolvent clause:

Animal(g(x)) ∨ ¬Kills(f(x), x)
Given the following facts, we need to prove by resolution that John likes peanuts:
∀x (Food(x) → Likes(John, x))
Food(Apple) and Food(Vegetable)
∀x ∀y (Eats(x, y) ∧ ¬Killed(x, y) → Food(y))
Eats(Anil, Peanut) and ¬Killed(Anil, Peanut)
∀x (Eats(Anil, x) → Eats(Harry, x))
Convert each fact to CNF:

∀x (Food(x) → Likes(John, x)) becomes ¬Food(x) ∨ Likes(John, x)
Food(Apple) (already in CNF)
Food(Vegetable) (already in CNF)
∀x ∀y (Eats(x, y) ∧ ¬Killed(x, y) → Food(y)) becomes ¬Eats(x, y) ∨ Killed(x, y) ∨ Food(y)
Eats(Anil, Peanut) (already in CNF)
¬Killed(Anil, Peanut) (already in CNF)
∀x (Eats(Anil, x) → Eats(Harry, x)) becomes ¬Eats(Anil, x) ∨ Eats(Harry, x)
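The conversion pattern used above, where an implication whose premises are literals becomes a single clause, can be sketched as follows (the literal-string encoding with "~" for negation is illustrative):

```python
# (p1 ∧ ... ∧ pn) -> q, with every pi a literal, becomes the
# single CNF clause ~p1 ∨ ... ∨ ~pn ∨ q.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def implication_to_clause(premises, conclusion):
    """CNF clause for (premises conjoined) -> conclusion."""
    return frozenset(negate(p) for p in premises) | {conclusion}

print(sorted(implication_to_clause(
    ["Eats(x,y)", "~Killed(x,y)"], "Food(y)")))
# ['Food(y)', 'Killed(x,y)', '~Eats(x,y)']
```

Note how negating the already-negative premise ¬Killed(x, y) produces the positive literal Killed(x, y) in the clause.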
Negate the statement "John likes peanuts": ¬Likes(John, Peanut)
Combine all the clauses, including the negated statement, and resolve them step-by-step:

Resolve ¬Likes(John, Peanut) with ¬Food(x) ∨ Likes(John, x) using x = Peanut: the instantiated clause ¬Food(Peanut) ∨ Likes(John, Peanut) resolves against ¬Likes(John, Peanut) to give ¬Food(Peanut)
Instantiate ¬Eats(x, y) ∨ Killed(x, y) ∨ Food(y) with x = Anil and y = Peanut to obtain ¬Eats(Anil, Peanut) ∨ Killed(Anil, Peanut) ∨ Food(Peanut), then resolve against the facts Eats(Anil, Peanut) and ¬Killed(Anil, Peanut) to give Food(Peanut)
Resolve ¬Food(Peanut) and Food(Peanut) to derive a contradiction.

The resolution graph begins with the clause ¬Likes(John, Peanut). In the first step, this clause is resolved with ¬Food(x) ∨ Likes(John, x) by substituting {Peanut/x}, resulting in the clause ¬Food(Peanut). In the second step, ¬Food(Peanut) is resolved with ¬Eats(x, y) ∨ Killed(x, y) ∨ Food(y) using the substitution {Peanut/y}, yielding ¬Eats(x, Peanut) ∨ Killed(x, Peanut). In the third step, this clause is resolved with Eats(Anil, Peanut) by substituting {Anil/x}, leaving Killed(Anil, Peanut). In the fourth step, Killed(Anil, Peanut) is resolved with the fact ¬Killed(Anil, Peanut), producing the empty clause {}. This contradiction indicates that the negation of the initial statement is false, thus proving that "John likes peanuts" is true by resolution.
The contradiction proves that the negation of our statement is false. Therefore, the original statement "John likes peanuts" is proven true by resolution.
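Since this proof only needs ground instances of the clauses, the whole refutation can be sketched with a propositional-style saturation loop. This is an illustrative sketch; the string encoding of ground literals is our own convention:

```python
# The "John likes peanuts" refutation: instantiate the CNF clauses
# with the relevant constants and saturate by resolution.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def refutes(clauses):
    """True if resolution derives the empty clause from the clause set."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                for lit in a:
                    if negate(lit) in b:
                        r = (a - {lit}) | (b - {negate(lit)})
                        if not r:       # empty clause derived
                            return True
                        new.add(frozenset(r))
        if new <= clauses:
            return False
        clauses |= new

clauses = [
    frozenset({"~Food(Peanut)", "Likes(John,Peanut)"}),  # rule 1, x = Peanut
    frozenset({"Food(Apple)"}), frozenset({"Food(Vegetable)"}),
    frozenset({"~Eats(Anil,Peanut)", "Killed(Anil,Peanut)",
               "Food(Peanut)"}),                         # rule 2, instantiated
    frozenset({"Eats(Anil,Peanut)"}), frozenset({"~Killed(Anil,Peanut)"}),
    frozenset({"~Likes(John,Peanut)"}),                  # negated goal
]
print(refutes(clauses))  # True: "John likes peanuts" is proven
```

A full first-order prover would perform the instantiation itself via unification rather than receiving ground clauses, as the resolution graph above describes.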
The resolution rule in FOL allows for the systematic resolution of clauses, enabling the derivation of new knowledge from existing statements.