A *ring* $R$ is a set with two algebraic structures, an addition and a
multiplication. With respect to addition, $R$ is a group, and the additive
identity element is denoted $0$. With respect to multiplication, $R$ is not
required to be a group, but only a monoid. Thus the multiplication satisfies
the associative law, and $R$ has a multiplicative identity element $1 = 1_R$
that satisfies $1 \cdot x = x \cdot 1 = x$. (Rings without the unit assumption
do come up in practice, particularly in analysis, but we will not need them.)
The addition and multiplication are compatible in that they satisfy both
distributive laws
\[ x (y + z) = x y + x z, \hspace{2em} (x + y) z = x z + y z. \]
If $x y = y x$ for all $x, y$ the ring is called *commutative*.

If $R$ is a ring and $\varepsilon \in R$, then $\varepsilon$ is called a
*unit* if there exists $\delta \in R$ such that $\varepsilon \delta =
\delta \varepsilon = 1$. The units form a group, denoted $R^{\times}$. This is
the *multiplicative group* of $R$.
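To make this concrete, here is a small Python sketch (the helper names `units` and `inverse` are our own, not from the text) computing the multiplicative group of $\mathbb{Z}/n\mathbb{Z}$ by brute force:

```python
from math import gcd

def units(n):
    """Units of Z/nZ: residues x coprime to n (these are exactly
    the invertible elements; illustrative helper)."""
    return [x for x in range(1, n) if gcd(x, n) == 1]

def inverse(x, n):
    """Find delta with x * delta = 1 mod n by brute-force search."""
    for d in range(1, n):
        if (x * d) % n == 1:
            return d
    raise ValueError(f"{x} is not a unit mod {n}")

print(units(12))        # the group (Z/12)^x
print(inverse(5, 12))   # 5 * 5 = 25 = 1 mod 12, so 5 is its own inverse
```

One can check directly that the list returned by `units` is closed under multiplication mod $n$, as the group axioms require.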

The ring $R$ is called an *integral domain* if it is commutative, $1
\neq 0$ and if for all $0 \neq x, y \in R$ we have $x y \neq 0$. The ring $R$
is called a *field* if it is commutative, $1 \neq 0$, and every nonzero
element is a unit. A field is clearly an integral domain.
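The rings $\mathbb{Z}/n\mathbb{Z}$ illustrate both definitions. The following sketch (function names are our own) exhibits zero divisors in $\mathbb{Z}/6\mathbb{Z}$, so it is not an integral domain, while $\mathbb{Z}/5\mathbb{Z}$ is a field:

```python
def zero_divisors(n):
    """Pairs of nonzero x, y in Z/nZ with x*y = 0 mod n."""
    return [(x, y) for x in range(1, n) for y in range(1, n)
            if (x * y) % n == 0]

def is_field(n):
    """Z/nZ is a field iff every nonzero element has an inverse mod n."""
    return all(any((x * y) % n == 1 for y in range(1, n))
               for x in range(1, n))

print(zero_divisors(6))  # contains (2, 3): Z/6 is not an integral domain
print(is_field(5), is_field(6))
```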

A *ring* homomorphism is a map $f : R \longrightarrow S$ between rings
such that $f (x + y) = f (x) + f (y)$, $f (x y) = f (x) f (y)$ and $f (1_R) =
1_S$. We note that since $R$ is a group under addition, the rule $f (0) = 0$ is
a consequence of the definition, but the rule $f (1) = 1$ must be assumed.

A (left) *module* $M$ over the ring $R$ (an $R$-*module*) is an
abelian group endowed with a multiplication $R \times M \longrightarrow M$
such that $r (m + m') = r m + r m'$, $(r + r') m = r m + r' m$, $(r r') m =
r (r' m)$ and $1 \cdot m = m$ for $r, r' \in R$ and $m, m' \in M$. An
additive subgroup $N$ of $M$ such
that $R N \subseteq N$ is called a *submodule* of $M$.

If $R$ is a field, a module is called a *vector space*. We will assume
some properties of vector spaces to be already known, particularly the theory of
dimension and the determinant for endomorphisms of a finite-dimensional vector
space.

The ring $R$ is a module over itself, and an $R$-submodule $\mathfrak{a}$ of
$R$ is called a *left ideal*. This means that $\mathfrak{a}$ is an
additive subgroup of $R$ and that $R\mathfrak{a} \subseteq \mathfrak{a}$.
(Actually $\mathfrak{a}= 1 \cdot \mathfrak{a} \subseteq R \cdot \mathfrak{a}$,
so $R\mathfrak{a}=\mathfrak{a}$.) There is a dual notion of a *right
ideal*, which is an additive subgroup satisfying $\mathfrak{a}R
\subseteq \mathfrak{a}$. If $\mathfrak{a}$ is both a left and a right ideal then
$\mathfrak{a}$ is called a *two-sided ideal*. If $R$ is commutative,
then an ideal is automatically two-sided.

**Exercise 1.2.1:***
If $f : R \longrightarrow S$ is a ring homomorphism, show that the kernel of
$f$ is a two-sided ideal.
*

If $R$ is a ring and $\mathfrak{a}$ is a two-sided ideal, then we may form the quotient group $R /\mathfrak{a}$. In other words, $\mathfrak{a}$ is an additive subgroup of $R$, automatically normal since $R$ is abelian (as an additive group) and so $R /\mathfrak{a}$ is the set of cosets $r +\mathfrak{a}$ with the additive structure defined by Proposition 1.1.4.

**Lemma 1.2.1:***
If $R$ is a ring and $\mathfrak{a}$ is a two-sided ideal, and if $r, s, r',
s' \in R$, and if $r +\mathfrak{a}= r' +\mathfrak{a}$ and $s +\mathfrak{a}=
s' +\mathfrak{a}$, then
\[ r s +\mathfrak{a}= r' s' +\mathfrak{a}. \]
*

**Proof.**

Write $r' = r + \alpha$ and $s' = s + \beta$ where $\alpha, \beta \in
\mathfrak{a}$. Then $r' s' = r s + \gamma$ where
\[ \gamma = r \beta + \alpha s + \alpha \beta . \]
Each term is in $\mathfrak{a}$ since $\mathfrak{a}$ is a two-sided ideal, so
$\gamma \in \mathfrak{a}$. Now the claimed equality of cosets follows.

The Lemma means that we can define $(r +\mathfrak{a}) (s +\mathfrak{a}) = r s
+\mathfrak{a}$, and this is well-defined. We have defined both an addition and
a multiplication in $R /\mathfrak{a}$, and it becomes a ring, the
*quotient ring*.
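For $R =\mathbb{Z}$ and $\mathfrak{a}= n\mathbb{Z}$ this is the familiar ring $\mathbb{Z}/n\mathbb{Z}$. The following Python sketch (the class `Coset` is our own illustration, not a standard library type) models the cosets and checks an instance of Lemma 1.2.1:

```python
class Coset:
    """The coset r + a of Z/nZ with a = nZ, modeling the quotient
    ring construction (illustrative sketch)."""
    def __init__(self, r, n):
        self.n = n
        self.r = r % n          # normalize to a canonical representative
    def __add__(self, other):
        return Coset(self.r + other.r, self.n)
    def __mul__(self, other):
        return Coset(self.r * other.r, self.n)
    def __eq__(self, other):
        return self.n == other.n and self.r == other.r

# Lemma 1.2.1 in action: 2 + 6Z = 8 + 6Z and 3 + 6Z = 9 + 6Z,
# and the products agree as cosets, so multiplication is well-defined.
assert Coset(2, 6) * Coset(3, 6) == Coset(8, 6) * Coset(9, 6)
```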

The following exercise gives an analog of Theorem 1.1.1 in the context of rings. Roughly speaking, in the world of rings, two-sided ideals are the analogs of normal subgroups; they are the kernels of homomorphisms.

**Exercise 1.2.2:***
Let $\phi : R \longrightarrow S$ be a surjective
homomorphism of rings, and let $\mathfrak{a}= \ker (\phi)$, a two-sided
ideal. Then the isomorphism $R /\mathfrak{a} \cong S$ of additive groups in
Theorem 1.1.1 is an isomorphism of rings.
*

A very useful class of rings is that of the *principal ideal domains*. To define
this, we note that if $R$ is a commutative ring and $x \in R$, then $R x$ is a
two-sided ideal; an ideal of this form is called *principal*. A ring
$R$ is called a *principal ideal domain* if it is an integral domain
and if for every ideal $I$ of $R$ there is an $x \in R$ such that $I = R x$.
We call $x$ a *generator* of the principal ideal $R x$.

Particular principal ideal domains include $\mathbb{Z}$, the ring $\mathbb{Z}[i]$ of Gaussian integers (where $i = \sqrt{- 1}$), and the polynomial ring $F [T]$ in one variable $T$ over the field $F$.

**Exercise 1.2.3:***
Prove that $\mathbb{Z}$ is a principal ideal domain. (Hint: if
$I = 0$, that is, the zero ideal $\{0\}$, then take $x = 0$; otherwise, take
$x$ to be the smallest positive element of $I$. The division algorithm must
play a role in your proof.)
*
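The hint can be tested numerically. The sketch below (the function name and the search `bound` are our own assumptions) samples the ideal $\mathbb{Z}a +\mathbb{Z}b$ with small coefficients and finds its smallest positive element, which turns out to be $\gcd (a, b)$:

```python
from math import gcd

def smallest_positive_in_ideal(a, b, bound=50):
    """Smallest positive element of the ideal Za + Zb, found by a
    brute-force search over small coefficients (illustrative only;
    bound is an arbitrary cutoff)."""
    elements = {k * a + l * b
                for k in range(-bound, bound + 1)
                for l in range(-bound, bound + 1)}
    return min(x for x in elements if x > 0)

# The generator predicted by the hint agrees with the gcd:
print(smallest_positive_in_ideal(12, 18))  # → 6, which is gcd(12, 18)
```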

**Proposition 1.2.1:***
Let $G$ be an abelian group, written additively. Then $G$ has
the natural structure of a module over $\mathbb{Z}$.
*

**Proof.**

If $n = 0$ and $g \in G$ define $n \cdot g = 0$. If $n > 0$, define $n \cdot
g = g + \ldots + g$, where the sum is of $n$ copies of $g$. Finally, if $n <
0$, then $(- n) \cdot g$ is already defined, so let
\[ n \cdot g = - (- n) \cdot g. \]
We leave the reader to check that these definitions make $G$ into a
$\mathbb{Z}$-module.
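The definitions in the proof translate directly into code. In the sketch below (our own encoding, not from the text) an abelian group is given by its addition, zero, and negation, and `zmul` computes $n \cdot g$ exactly as in the three cases above:

```python
def zmul(n, g, add, zero, neg):
    """n·g in an abelian group given as (add, zero, neg): the
    Z-module structure of Proposition 1.2.1 (illustrative sketch)."""
    if n == 0:
        return zero
    if n < 0:
        return neg(zmul(-n, g, add, zero, neg))  # n·g = -((-n)·g)
    result = zero
    for _ in range(n):          # g + ... + g, with n copies of g
        result = add(result, g)
    return result

# The group (Z, +) as a module over Z:
print(zmul(3, 5, lambda x, y: x + y, 0, lambda x: -x))   # → 15
print(zmul(-2, 5, lambda x, y: x + y, 0, lambda x: -x))  # → -10
```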

Many of the familiar facts about elementary number theory work in the context
of principal ideal domains. Particularly, in a principal ideal domain we have
unique factorization into primes. We will not review this, but we do take a
minute to review the notion of a greatest common divisor. If $R$ is an
integral domain, and $a, b$ are elements, we say that $a$ *divides*
$b$, or that $b$ is a *multiple of* $a$, or that $a$ is a
*divisor of* $b$, and write $a|b$, if $b = q a$ for some $q \in R$.

**Proposition 1.2.2:***
Let $R$ be a principal ideal domain, and let $a, b \in R$. Then there exists
an element $d$ of $R$ such that $m$ divides both $a$ and $b$ if and only if
$m$ divides $d$. We may find $k, l \in R$ such that
\[ d = k a + l b. \]
*

Before we prove this, let us take a moment to parse the phrase "greatest common divisor.'' The conclusion will be that $d$ in the Proposition is the greatest common divisor of $a$ and $b$. Indeed, a common divisor of $a$ and $b$ clearly means an element of $R$ that divides both, as with $m$ in the Proposition. As for "greatest,'' we may order the elements of $R$ by divisibility, giving $R$ the partial order in which $x$ precedes $y$ if $x$ divides $y$. Then $d$ is a common divisor of $a$ and $b$, and it is "greater'' in the sense that it succeeds all other common divisors in this partial order.

**Proof.**

Let $\mathfrak{a}= R a + R b$. It is clearly an ideal, and since $R$ is a
principal ideal domain, it is $R d$ for some $d$. As $d \in R d = R a + R b$,
we can write $d = k a + l b$ with $k, l \in R$. We observe that $a, b \in R a
+ R b = R d$, so $a, b$ are multiples of $d$, and any divisor $m$ of $d$ is
a common divisor of $a$ and $b$. Conversely, if $m$ is a common divisor of
$a$ and $b$, then $d = k a + l b$ shows that $m$ is a divisor of $d$.
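In $\mathbb{Z}$ the coefficients $k, l$ can be computed explicitly by the extended Euclidean algorithm; the sketch below is a standard implementation (not taken from the text):

```python
def extended_gcd(a, b):
    """Return (d, k, l) with d = k*a + l*b, where d is a greatest
    common divisor of a and b in Z (extended Euclidean algorithm)."""
    if b == 0:
        return (a, 1, 0)
    d, k, l = extended_gcd(b, a % b)
    # d = k*b + l*(a - (a//b)*b) = l*a + (k - (a//b)*l)*b
    return (d, l, k - (a // b) * l)

d, k, l = extended_gcd(240, 46)
print(d, k, l)  # d divides both inputs, and d = k*240 + l*46
```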

If $a, b$ are elements of the integral domain $R$, and if $a = \varepsilon b$
where $\varepsilon$ is a unit, then we say that $a$ and $b$ are
*associates*.

**Exercise 1.2.4:***
Prove that if $d$ and $d'$ are greatest common divisors
of $a$ and $b$ in the principal ideal domain $R$, then they are associates.
*

**Exercise 1.2.5:***
Prove that if $F$ and $K$ are fields and $f : F \longrightarrow K$ is a ring
homomorphism, then $f$ is injective.
*

**Exercise 1.2.6:***
If $R$ is an integral domain, prove there exists a
field $F$ containing $R$. More precisely, construct a field $F$ and an
injective ring homomorphism $i : R \longrightarrow F$.
*

*
Hint: you may generalize the construction of the field
$\mathbb{Q}$ of rational numbers from the ring $\mathbb{Z}$ of integers as
follows. Let $\Sigma$ be the set of ordered pairs $(a, b) \in R \times R$
with $b \neq 0$. Define $(a, b) \sim (a', b')$ if $a b' = a' b$, and let $a
/ b$ be the equivalence class of $(a, b)$. Define addition and
multiplication of fractions as usual:
\[ \frac{a}{b} \cdot \frac{a'}{b'} = \frac{a a'}{b b'}, \hspace{2em}
\frac{a}{b} + \frac{a'}{b'} = \frac{a b' + b a'}{b b'} . \]
Check that these operations are well-defined and make the set $F$ of
equivalence classes into a field. Define $i$ appropriately and check that it
is an injective ring homomorphism.
*
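Following the hint for $R =\mathbb{Z}$, the construction can be sketched in a few lines of Python (the class `Frac` and function `i` are our own names; fractions are kept unreduced, with equality tested through the equivalence relation):

```python
class Frac:
    """a/b with a, b integers, b != 0: a sketch of the field-of-
    fractions construction for R = Z (illustrative only)."""
    def __init__(self, a, b):
        assert b != 0
        self.a, self.b = a, b
    def __eq__(self, other):
        return self.a * other.b == other.a * self.b   # (a,b) ~ (a',b')
    def __add__(self, other):
        return Frac(self.a * other.b + self.b * other.a, self.b * other.b)
    def __mul__(self, other):
        return Frac(self.a * other.a, self.b * other.b)

def i(r):
    """The injection R -> F sending r to r/1."""
    return Frac(r, 1)

# 1/2 + 1/3 = 5/6, and unreduced representatives are equal as fractions:
assert Frac(1, 2) + Frac(1, 3) == Frac(5, 6)
assert Frac(2, 4) == Frac(1, 2)
```

Checking that the two operations are well-defined on equivalence classes is exactly the verification the exercise asks for.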

There are many fields containing an integral domain $R$, only one of which is constructed in Exercise 1.2.6. However that particular one is special, as the next Exercise shows.

**Exercise 1.2.7:***
Let $R$ be an integral domain and let $F, i$ be the field
and injection constructed in Exercise 1.2.6, following the
Hint. Prove that if $K$ is any field and $j : R \longrightarrow K$ is any
injective homomorphism, then there exists a unique ring homomorphism $\mu :
F \longrightarrow K$ such that $j = \mu \circ i$.
*

**Exercise 1.2.8:***
Show that $F$ is characterized up to isomorphism by the
property of Exercise 1.2.7. That is, assume that $F'$ is a
field and $i' : R \longrightarrow F'$ is an injective ring homomorphism.
Also assume that if $K$ is any field and $j : R \longrightarrow K$ is any
injective homomorphism, then there exists a unique ring homomorphism $\mu' :
F' \longrightarrow K$ such that $j = \mu' \circ i'$. Prove that $F \cong F'$.
More precisely, prove that there is an isomorphism $\alpha : F
\longrightarrow F'$ such that $\alpha \circ i = i'$.
*

The field $F$ constructed in Exercise 1.2.6 is called the
*field of fractions* of the integral domain $R$.
Exercise 1.2.7 characterizes it by a "universal property,'' and
Exercise 1.2.8 shows that this universal property characterizes it
uniquely. We will formalize this as follows.

Let $\Sigma$ be a set, $\preccurlyeq$ a relation on $\Sigma$. Assume that if
$x \preccurlyeq y$ and $y \preccurlyeq z$ then $x \preccurlyeq z$. Also,
assume that $x \preccurlyeq y$ and $y \preccurlyeq x$ both hold if and only if
$x = y$. Such a set with a relation is called a *partially ordered
set*. If $x_0 \in \Sigma$ such that $x_0 \preccurlyeq y$ for all $y \in
\Sigma$ then $x_0$ is called an *initial element*. If $x_{\infty} \in
\Sigma$ such that $y \preccurlyeq x_{\infty}$ for all $y \in \Sigma$, then $x_{\infty}$
is called a *terminal element*.
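For a finite partially ordered set these notions are easy to check by machine. The sketch below (our own illustration) uses the divisors of $12$ ordered by divisibility; the initial element is $1$ and the terminal element is $12$:

```python
def initial_elements(S, leq):
    """Elements x0 with x0 <= y for all y in S. Exercise 1.2.9 will
    show there is at most one (illustrative finite-poset check)."""
    return [x for x in S if all(leq(x, y) for y in S)]

divides = lambda x, y: y % x == 0
S = [1, 2, 3, 4, 6, 12]          # divisors of 12, ordered by divisibility

print(initial_elements(S, divides))                     # initial: [1]
# Reversing the order turns terminal elements into initial ones:
print(initial_elements(S, lambda x, y: divides(y, x)))  # terminal: [12]
```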

We digress now to explain why the word "set'' does not appear in the upcoming Exercise 1.2.10.

Some collections of objects are too large to be sets. For example, in most consistent axiomatizations of set theory there is no "set of all sets'' since the existence of such a set would lead to Russell's paradox. To explain, if $X$ and $Y$ are sets, the notation $X \in Y$ has meaning and is either true or false. If there is a set $\Omega$ of all sets, then $\Omega \in \Omega$ would be a true statement. Then we can define a subset $Y$ of $\Omega$ to be $\{x \in \Omega | x \notin x\}$, and as Russell observed, the statement $Y \in Y$ is true if and only if it is false.

There were two ingredients in Russell's paradox as we have just explained it:
first, a "set of all sets,'' and second, an operation by which we pass from a
set to a subset by selecting the elements that have a certain property (in
this case the property of $x$ that $x \notin x$). It turns out that the first
ingredient is more convenient to give up than the second, and so there is no
set of all sets. Sets form a *class*, by which we mean a collection of
objects that is not asserted to be a set. If $X$ is a class, the notation $x
\in X$ has meaning and is either true or false; however the notation $X \in x$
does not have meaning unless the class $X$ is a set. Some axiomatizations of
set theory give classes; others do not. But even if an axiomatization does not
give classes they can be added without introducing paradoxes. From this point
of view, a class is nothing more than a property $P (x)$ that can be true or
false; then we may write $\Pi =\{x| P (x)\}$, but this is just a notational
convenience. The statement $x \in \Pi$ is defined to mean $P (x)$ and the
statement $\Pi \in x$ is not defined. Thus there is a class $\Omega$ of all
sets, but its meaning is limited to the claim that $x \in \Omega$ if and only
if $x$ is a set. Russell's paradox cannot be deduced.

If $\Sigma$ is a class, we may still sometimes find a relation $\preccurlyeq$
on $\Sigma$ satisfying the defining properties of a partial order, in
particular that $x \preccurlyeq y$ and $y \preccurlyeq z$ implies $x \preccurlyeq z$. Thus we
speak of a *partially ordered class*. For example, the class of sets is
partially ordered with respect to inclusion.

The following exercises hint at the universal nature of universal properties. However the point of view is in some ways inadequate and they shouldn't be taken too seriously. We will come back to this point later in section 1.3.

**Exercise 1.2.9:***
Prove that a partially ordered set or class can have at
most one initial element, and at most one terminal element.
*

**Exercise 1.2.10:***
Let $R$ be an integral domain. Let $\Sigma$ be the class
of all ordered pairs $(K, j)$ such that $K$ is a field and $j : R
\longrightarrow K$ is an injective ring homomorphism. Define $(K, j)
\preccurlyeq (K', j')$ to mean that there is an injective homomorphism $f :
K \longrightarrow K'$ such that $f \circ j = j'$. Show that this is a
partially ordered class and that $(F, i)$, with $F$ and $i$ as constructed in
Exercise 1.2.6, is an initial element.
*

**Exercise 1.2.11:***
Give a similar interpretation of the greatest common divisor in
an integral domain as a terminal element in a partially ordered set, and
explain Exercise 1.2.4 as an instance of
Exercise 1.2.9.
*

**Exercise 1.2.12:***
We were careful not to use the word "set'' in Exercise 1.2.10
but used it in Exercise 1.2.11. Why was that OK?
*