% File: algebra.tex
% Section: MATH
% Title: Basic algebra
% Last modified: 04.06.2002
%
\documentclass[a4paper,12pt]{article}
\usepackage{amsmath,amssymb}
\textwidth 16cm
\textheight 25cm
\oddsidemargin 0cm
\topmargin -1.5cm
\pagestyle{empty}
\begin{document}

\begin{center}
\large\textbf{BASIC ALGEBRA}
\end{center}
\vspace{.0cm}

\begin{center}
\large\textbf{Vector spaces}
\end{center}
\vspace{.1cm}

An ($n$-dimensional) vector space $V$ (over a field $\mathbb{K}$) consists of \emph{vectors}\,, i.e., formal sums
\begin{equation} \label{V}
x = x^i e_i \ \ \ \ \ \ \ \ x^i \in \mathbb{K}\,, \ \ \ \ \ \ \ \ i = 1, 2, \ldots , n\,.
\end{equation}
Usually, $\mathbb{K}$ will be the field $\mathbb{C}$ of complex numbers. One can add vectors and multiply them by numbers (elements of $\mathbb{K}$)\,, according to the obvious rules (distributivity, etc.). The $n$ elements $e_i$ form a \emph{basis} of $V$; they are the primary objects in this setting. They are also linearly independent, i.e., $x=0$ if and only if all $x^i=0$\,. Hence, any vector $x\in V$ admits a unique expansion (\ref{V})\,, $x^i$ being its coordinates with respect to the basis $\{e_i\}$\,.

In infinite-dimensional spaces, a basis $\{e_i\}$ is assumed to be an infinite linearly independent set, whereas the vectors are finite sums of the form (\ref{V})\,. In other words, each vector $x$ may have only a finite number of nonzero coordinates $x^i$\,.

The \emph{dual} space $V^*$ (over the same field $\mathbb{K}$) consists of maps $f:V\rightarrow\mathbb{K}$ (linear functions on $V$)\,. In the notation $f(x)\doteq\,<\!f,x\!>\in\mathbb{K}$\,, linearity in both entries reads ($x,y\in V, \ f,g\in V^*, \ \alpha, \beta \in\mathbb{K}$)\,:
\begin{equation}
<\!f, \alpha x + \beta y\!> = \alpha<\!f,x\!> + \,\beta<\!f,y\!>\,, \ \ \ \ \ <\!\alpha f + \beta g\,, x\!> = \alpha<\!f,x\!> + \,\beta<\!g\,,x\!>.
\end{equation}
$V^*$ is also an $n$-dimensional vector space, with the basis $\{e^i\}$ such that
\begin{equation} \label{V*}
<\!e^i,e_j\!> = \delta^i_j\,, \ \ \ \ <\!e^i,x\!> = x^i\,, \ \ \ \ f = f_i e^i\,, \ \ \ \ f_i = <\!f,e_i\!>\,, \ \ \ \ <\!f,x\!> = f_i x^i\,,
\end{equation}
so the $e^i$ are coordinate functions for the vectors in $V$. For finite $n$ one has $(V^*)^*\sim V$\,, whereas in the infinite-dimensional case only $V\subset(V^*)^*$ is true in general.

The direct sum of spaces $V$ and $W$ (over $\mathbb{K}$) with bases $\{e_i\}$ and $\{e'_j\}$\,, respectively, is the $(n+m)$-dimensional space $V\oplus W$ with the basis $\{e_1,\ldots e_n,e'_1,\ldots e'_m\}$\,, so that any $z\in V\oplus W$ can be unambiguously decomposed into a sum of $V$- and $W$-parts: $z=x+y'$\,.

For the same $V$ and $W$\,, the tensor product $V\otimes W$ (or, more accurately, $V\otimes_\mathbb{K} W$) is an $nm$-dimensional vector space whose basis $\{e_i\otimes e'_j\}$ is assumed to consist of elementary, structureless objects $e_i\otimes e'_j$\,. Bilinearity
\begin{equation} \label{otimes}
x\otimes y' = x^i e_i\otimes y^j e'_j = x^i y^j (e_i\otimes e'_j)
\end{equation}
shows that $\otimes$ provides the linearization of the set-theoretic notion of the direct product: for example, $(2x,y)\neq(x,2y)$ as mere pairs, whereas $2x\otimes y = x\otimes 2y = 2(x\otimes y)$\,. It is customary to identify
\begin{equation}
V\otimes\mathbb{K} \simeq \mathbb{K}\otimes V \simeq V\,, \ \ \ \ \ \ e_i\otimes 1 \simeq 1\otimes e_i \simeq e_i\,,
\end{equation}
where $\mathbb{K}$ is treated as a 1-dimensional space over itself.
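As a quick numerical check of the two constructions (a worked example added here for concreteness, with coefficients $c^{ij}$ introduced only for this illustration), take $\dim V=2$ and $\dim W=3$\,. Then
\begin{equation} \notag
\dim(V\oplus W) = 2+3 = 5\,, \ \ \ \ \ \ \dim(V\otimes W) = 2\cdot3 = 6\,,
\end{equation}
and a generic element of $V\otimes W$ is a sum $c^{ij}\,e_i\otimes e'_j$ with six independent coefficients, which in general does not factorize into a single product $x\otimes y'$ of the form (\ref{otimes})\,.
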
A linear map (linear operator, endomorphism) $M:V\rightarrow V$ is completely specified by its action upon the basis, which can be described in terms of matrix elements $M_i^j$\,:
\begin{equation} \label{endom}
M(\alpha x + \beta y) = \alpha Mx + \beta My\,, \ \ \ \ M e_i = M_i^j e_j \ \ \ \ \ \Rightarrow \ \ \ \ \ (Mx)^j = M_i^j x^i\,,
\end{equation}
with rows of the matrix $M$ numbered by the upper and columns by the lower indices (if we arrange the coordinates of a vector into a column)\,. The composition (or product) of such endomorphisms is obviously associative and is explicitly given by
\begin{equation} \label{compos}
(MN)x \equiv (M\circ N)x \doteq M(Nx) \ \ \ \ \ \Rightarrow \ \ \ \ \ (MN)_i^j = M_k^j N_i^k\,.
\end{equation}
The set (actually, the vector space) of all endomorphisms $M:V\rightarrow V$ is denoted End$V$\,.
\newpage

In general, $M$ may send some vectors $x\in V$ to zero. Denote the set of all such vectors by Ker$M$\,, and the total image of $M$ in $V$ by Im$M$\,:
\begin{equation} \label{Ker}
\text{Ker}M = \{x\in V : Mx = 0\}\,, \ \ \ \ \ \text{Im}M = \{x\in V : x = My\,, \ \ y\in V\}\,.
\end{equation}
Both Ker$M$ and Im$M$ are subspaces of $V$ (i.e., they are closed under addition and multiplication by numbers)\,. Ker$M\neq0$ obviously entails that $M$ is not invertible, whereas Ker$M=0$ ensures (at least in the finite-dimensional case) the invertibility of $M$\,. The notions of Ker and Im naturally generalize to linear maps (homomorphisms) $M:V\rightarrow W$ of two different vector spaces.

Let now some $P\in \text{End}V$ obey $P\circ P \equiv P^2 = P$ (such endomorphisms are called projectors). Then $V=\text{Im}P\oplus\text{Ker}P$\,. Indeed, for any $x\in V$ with $Px=y$\,, the formula $x=y+(x-y)$ delivers the appropriate (unique) decomposition, since $P(x-y)=Px-P^2x=0$\,. For an arbitrary $M:V\rightarrow W$ one only has $V/\text{Ker}M\sim\text{Im}M$\,, where the quotient $V/V'$ of $V$ by its subspace $V'$ is the vector space formed by the classes $\{x+V', \ x\in V\}$ (the zero vector of this space being $V'$ itself).

We see that, in general, endomorphisms may fail to be invertible. There is, however, a situation where only invertible maps are allowed: a change from one basis to another, $e_i\rightarrow \tilde{e}_i=T_i^j e_j$\,. The endomorphism $T$ must be nondegenerate and hence invertible:
\begin{equation} \label{change}
\text{det}\,T \neq 0\,, \ \ \ \ \ e_i = (T^{-1})_i^j\,\tilde{e}_j\,, \ \ \ \ \ x = x^i e_i = \tilde{x}^i\tilde{e}_i \ \ \ \ \ \Rightarrow \ \ \ \ \tilde{x}^i = (T^{-1})^i_j\,x^j\,.
\end{equation}
Analogously, for any endomorphism $M$ we find
\begin{equation} \label{matr}
M \tilde{e}_i \doteq \tilde{M}_i^j\,\tilde{e}_j \ \ \ \ \ \Rightarrow \ \ \ \ \tilde{M}_i^j = (T^{-1})^j_k\,M^k_m\,T^m_i\,,
\end{equation}
or, in more condensed notation, $\tilde{x}=T^{-1}x, \ \tilde{M}=T^{-1}MT$\,. From (\ref{change}), (\ref{matr}) we see that $T$ transforms the lower and $T^{-1}$ the upper indices. This rule persists for higher-rank tensors.

Though independent in the general situation, the upper and lower tensor indices, as well as the whole spaces $V$ and $V^*$, prove to be closely related when there exists a nondegenerate symmetric bilinear form $(\,,)$ on $V$, specified by
\begin{equation} \label{scal}
(e_i,e_j) = g_{ij} = g_{ji}\,, \ \ \ \ \ \text{det}\,g \neq 0 \ \ \ \ \ \Rightarrow \ \ \ \ \exists\,g^{ij} : \ g_{ik}g^{kj} = g^{jk}g_{ki} = \delta_i^j\,.
\end{equation}
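As a simple instance of (\ref{scal}) (an example added here for illustration; the particular Minkowski-like choice of $g$ is arbitrary), take $n=2$ and
\begin{equation} \notag
g_{ij} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\,, \ \ \ \ \ \ g^{ij} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\,, \ \ \ \ \ \ g_{ik}\,g^{kj} = \delta_i^j\,;
\end{equation}
raising or lowering an index with this $g$, as described below, merely flips the sign of the second component.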
Then each $f\in V^*$ finds a counterpart $\underline{f}\in V$ (independently of the choice of basis):
\begin{equation} \label{under}
(\underline{f},x) = \ <\!f,x\!> \ \ \ \ \forall x \in V\,.
\end{equation}
Explicitly,
\begin{equation} \label{identif}
\underline{e}^i = g^{ij}e_j\,, \ \ \ \ \ \underline{f} = f_i \underline{e}^i = g^{ij} f_i e_j = (\underline{f})^j e_j\,.
\end{equation}
Thus, upper and lower indices get, in fact, identified: they can be lowered and raised by $g$ or its inverse.

An endomorphism $M^*:V^*\rightarrow V^*$ is called \emph{adjoint} to $M\in \text{End}V$ (and vice versa) if $<\!M^* f,x\!> \,= \,<\!f,Mx\!>$ for any $x\in V, \,f\in V^*$\,. With $\underline{M}^*\in \text{End}V$ defined through $\underline{M}^*\underline{f}\doteq\underline{M^*f}$\,, one finds
\begin{equation}
M^* e^i = M^i_j e^j\,, \ \ \ \ \ \underline{M}^* e_j = (\underline{M}^*)^i_j e_i\,, \ \ \ \ \ (\underline{M}^*)^i_j = g^{im} g_{jk} M^k_m\,.
\end{equation}
So, the matrices of adjoint endomorphisms are identical when each acts in its native basis, whereas when adjusted to the same space ($V$) they become transposes of each other.
\newpage

\begin{center}
\large\textbf{Algebras}
\end{center}
\vspace{.1cm}

An \emph{algebra} $A$ is a vector space equipped with a bilinear map $m:A\otimes A\rightarrow A$ (multiplication)\,, which usually obeys some extra properties like commutativity $ab=ba$\,, associativity (see below)\,, etc. Given a basis in $A$\,, the map $m$ is specified by a set of numerical structure constants, through the multiplication table
\begin{equation} \label{table}
m(e_i\otimes e_j) \equiv e_i e_j = C^k_{ij}\,e_k\,.
\end{equation}
Infinite-dimensional algebras are usually described in terms of (a finite number of) generators and relations, rather than in terms of a basis.

Given two algebras $A$ and $A'$\,, one can supply the vector space $A\oplus A'$ with an algebra structure by specifying the products of elements from different summands. If all such products are set to zero, $A\oplus A'$ is called a direct sum of algebras. The tensor product space $A\otimes A'$ is made an algebra (the tensor product of the algebras $A$ and $A'$) through the following postulate:
\begin{equation} \label{tensprod}
(a\otimes b)\,(c\otimes d) = ac\otimes bd\,.
\end{equation}
An algebra $A$ may contain a unity element $\mathbf{1}$, characterized by $\mathbf{1}a=a\mathbf{1}=a$\,. Such an element is unique (if it exists) and, generally, has nothing in common with the number $1\in \mathbb{K}$ (except when $A=\mathbb{K}$)\,.

A map $f:A\rightarrow A'$ is called a \emph{homomorphism} of the two algebras if it respects linearity and multiplication (and the unity, if any)\,:
\begin{equation} \label{hom}
f(\alpha a + \beta b) = \alpha f(a) + \beta f(b)\,, \ \ \ \ \ f(ab) = f(a) f(b)\,, \ \ \ \ \ f(\mathbf{1}) = \mathbf{1}'\,.
\end{equation}
The two most important classes of algebras are associative algebras (with unity) and Lie algebras. Associativity means
\begin{equation} \label{assoc}
(ab)\,c = a(bc) \doteq abc \ \ \ \ \ \ \Leftrightarrow \ \ \ \ \ C^n_{ij}\,C^m_{nk} = C^m_{in}\,C^n_{jk}\,.
\end{equation}
A standard example of an associative (but generally noncommutative) algebra with unity is provided by End$V$ (\ref{compos})\,: the composition of endomorphisms plays the role of the associative product, and the identity map \,$\text{id}:V\rightarrow V$ that of the unity.
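For a concrete multiplication table of this kind (a worked example added here for definiteness; the matrix units $E_{ij}$ are introduced only for this illustration), take $A=\text{End}V$ with $\dim V=2$ and the basis of matrix units acting on the basis vectors of $V$ by $E_{ij}\,e_l = \delta_{jl}\,e_i$\,. Then
\begin{equation} \notag
E_{ij}\,E_{kl} = \delta_{jk}\,E_{il}\,, \ \ \ \ \ \ \text{id} = E_{11}+E_{22}\,,
\end{equation}
and associativity is checked directly: $(E_{ij}E_{kl})\,E_{mn} = \delta_{jk}\delta_{lm}\,E_{in} = E_{ij}\,(E_{kl}E_{mn})$\,.
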
A Lie algebra $\mathfrak{g}$ (with a commutator $[\,,\,]$ instead of a product) is anticommutative,
\begin{equation} \label{Lie1}
[e_i, e_j] = C^k_{ij}\,e_k\,, \ \ \ \ \ C^k_{ij} = -C^k_{ji} \ \ \ \ \ \Leftrightarrow \ \ \ \ \ [x,y] = -[y,x]\,,
\end{equation}
has no unity, and exhibits the Jacobi property instead of associativity:
\begin{equation} \label{Lie2}
[x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0 \ \ \ \ \ \Leftrightarrow \ \ \ \ \ C^m_{in}\,C^n_{jk} + C^m_{jn}\,C^n_{ki} + C^m_{kn}\,C^n_{ij} = 0\,.
\end{equation}

An algebra homomorphism $\rho:A\rightarrow\text{End}V$ is called a \emph{representation} of the associative algebra $A$ on the vector space $V$\,. In other words, the algebra $A$ is realized by linear operators on $V$, or the algebra $A$ acts on $V$ \ ($A\,\triangleright V$)\,. All this means that $\rho$ associates a linear operator $\rho(a)\equiv T_a\in\text{End}V$ with any $a\in A$ according to the rules
\begin{equation} \label{repr}
\rho(\alpha a + \beta b) \equiv T_{\alpha a + \beta b} = \alpha T_a + \beta T_b\,, \ \ \ \ \ T_{ab} = T_a\circ T_b\,, \ \ \ \ \ T_{\mathbf{1}} = \mathbf{1}\,.
\end{equation}
Associative multiplication (from the left) naturally induces an action of $A$ on $A$ itself (the \emph{regular} representation)\,:
\begin{equation} \label{reg}
T_a\,b \doteq ab\,, \ \ \ \ \ \ T_{e_i}\,e_j = e_i e_j = C^k_{ij}\,e_k \ \ \ \ \ \Rightarrow \ \ \ \ (T_{e_i})^k_j = C^k_{ij}\,,
\end{equation}
with the matrix elements of the basic operators $\rho(e_i)=T_{e_i}$ given by the original structure constants of $A$\,.
%\newpage
To introduce representations of Lie algebras, we exploit the fact that any associative algebra can be endowed with a Lie structure as follows: $[a,b]=ab-ba$\,. Having this in mind for End$V$, one can demand $\rho:\mathfrak{g}\rightarrow\text{End}V$ to be a Lie algebra homomorphism,
\begin{equation} \label{repLie}
\rho(x) \equiv t_x\in\text{End}V\,, \ \ \ \ \ t_{[x,y]} = [t_x, t_y] = t_x\circ t_y - t_y\circ t_x\,.
\end{equation}
It is an immediate consequence of (\ref{Lie2}) that there exists a direct analog of the regular representation (\ref{reg}), namely, the \emph{adjoint} representation of a Lie algebra $\mathfrak{g}$ on itself\,:
\begin{equation} \label{adj}
\text{ad}_x\,y \doteq [x,y]\,, \ \ \ \ \ \ \ \ (\text{ad}_{e_i})^k_j = C^k_{ij}\,.
\end{equation}
These structure constants also serve in the \emph{coadjoint} representation $\text{ad}^*$ of $\mathfrak{g}$ on its dual~$\mathfrak{g}^*$,
\begin{equation} \label{coadj}
<\!\text{ad}^*_x f\,,\,y\!> \ \doteq \ <\!f\,,\,\text{ad}_x y\!> \ = \ <\!f\,,\,[x,y]\!>\,, \ \ \ \ \ \ \text{ad}^*_{e_i} e^k = C^k_{ij}\,e^j \ \ \ \ \ \ (f\in\mathfrak{g}^*)\,,
\end{equation}
which is actually an antirepresentation, due to the property \ $\text{ad}^*_{[x,y]} =\text{ad}^*_y\circ\text{ad}^*_x-\text{ad}^*_x\circ\text{ad}^*_y$\,.
\vspace{.1cm}

We see, by the way, that a mere product $xy$\,, though not allowed in a Lie algebra itself, virtually appears, under the guise of associative multiplication of endomorphisms, in all its representations. This observation is of a conceptual nature: one can actually define an associative algebra $U_{\mathfrak{g}}$ (the \emph{universal enveloping algebra} of $\mathfrak{g}$)\,, which has precisely the same mathematical content as the Lie algebra $\mathfrak{g}$ (and, in particular, the same representations)\,.
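Before describing $U_{\mathfrak{g}}$ explicitly, let us record a standard three-dimensional example of the above notions (added here for illustration; $\epsilon_{ijk}$ denotes the totally antisymmetric symbol with $\epsilon_{123}=1$), familiar from the theory of angular momentum:
\begin{equation} \notag
[e_i,e_j] = \epsilon_{ijk}\,e_k\,, \ \ \ \ \ \ C^k_{ij} = \epsilon_{ijk}\,, \ \ \ \ \ \ i,j,k = 1,2,3\,.
\end{equation}
Antisymmetry of $\epsilon$ gives (\ref{Lie1}), a short computation with the identity $\epsilon_{ijn}\,\epsilon_{nkm} = \delta_{ik}\delta_{jm}-\delta_{im}\delta_{jk}$ verifies (\ref{Lie2}), and the adjoint representation (\ref{adj}) is realized by the three $3\times3$ matrices $(\text{ad}_{e_i})^k_j = \epsilon_{ijk}$\,; for instance, $\text{ad}_{e_1} e_2 = e_3$\,, $\text{ad}_{e_1} e_3 = -e_2$\,, $\text{ad}_{e_1} e_1 = 0$\,.
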
In explicit terms, $U_{\mathfrak{g}}$ is an infinite-dimensional (even for finite-dimensional $\mathfrak{g}$) associative algebra with unity, built on the set of generators $\{e_i\}$\,, which obey only the constraints $e_i e_j - e_j e_i = C^k_{ij}\,e_k$\,, obviously inherited from $\mathfrak{g}$\,. It is spanned by $\mathbf{1}$ and monomials in the $e_i$\,, some of them being identified through the above constraints. For instance, the basis of $U_{\mathfrak{g}}$ may include $\mathbf{1},e_i$ and, among infinitely many others, also $e_1 e_2$ (say)\,, but not $e_2 e_1$\,, due to $e_2 e_1 - e_1 e_2 = C^k_{21}\,e_k$\,. The Poincar\'e--Birkhoff--Witt theorem describes the basis of $U_{\mathfrak{g}}$ explicitly, as consisting of ordered products of nonnegative powers of the generators $e_i$ (the empty monomial corresponding to $\mathbf{1}$)\,:
\begin{equation} \notag
e_1^{m_1}\,e_2^{m_2}\,\ldots \ \ \ \ \ \ \ (m_i\geqslant0)\,.
\end{equation}
Structure constants of $U_{\mathfrak{g}}$ can also be found (in principle) with the use of this basis and the constraints above. However, they are clearly not very convenient to deal with; working with the generators is preferable in most situations.
\newpage

\begin{center}
\large\textbf{Hopf algebras}
\end{center}
\vspace{.1cm}

A Hopf algebra $A$ is an associative algebra with unity, equipped with three extra operations: a \emph{coproduct} (or \emph{comultiplication}) $\Delta:A\rightarrow A\otimes A$\,, a \emph{counit} $\varepsilon:A\rightarrow\mathbb{K}$\,, and an \emph{antipode} $S:A\rightarrow A$\,, subject to a number of axioms. Let us begin with the coproduct. It must obey
\begin{equation} \label{coprod}
\Delta(ab) = \Delta(a)\,\Delta(b)\,, \ \ \ \ \ \Delta(\mathbf{1}) = \mathbf{1}\otimes\mathbf{1}\,, \ \ \ \ \ (\Delta\otimes\text{id})\circ\Delta = (\text{id}\otimes\Delta)\circ\Delta \doteq \Delta^2\,.
\end{equation}
The last equality is known as the \emph{coassociativity} condition\,. It is dual to associativity in the following sense. On a vector space $A^*$ dual to an associative algebra $A$\,, a coassociative coproduct $\Delta:A^*\rightarrow A^*\otimes A^*$ is naturally induced by the requirement
\begin{equation} \label{A*}
<\!\Delta(f)\,,\ a\otimes b\!> \ = \ <\!f,ab\!> \ \ \ \ \ \Leftrightarrow \ \ \ \ \ \Delta(e^k) = C^k_{ij}\,e^i\otimes e^j\,,
\end{equation}
where the multiplication structure constants of $A$ serve as the comultiplication ones in $A^*$\,. However, in a Hopf algebra both operations are assumed to have their own (co)associative structure constants, additionally constrained by the first equality in (\ref{coprod})\,:
\begin{equation} \label{(co)prod}
e_i e_j = C^k_{ij}\,e_k\,, \ \ \ \ \Delta(e_i) = \Delta_i^{jk}e_j\otimes e_k\,, \ \ \ \ C^k_{ij}\,\Delta_k^{rs} = C^r_{mp}\,C^s_{nq}\,\Delta_i^{mn}\Delta_j^{pq}\,.
\end{equation}
A coproduct is called \emph{cocommutative} if \, $\sigma\!\circ\Delta=\Delta \ \ \Leftrightarrow \ \ \Delta_i^{jk}=\Delta_i^{kj}$\,, where $\sigma$ swaps the tensor factors: $\sigma(v\otimes w)=w\otimes v$\,. The principal purpose of a coproduct is to transmit the algebra action to tensor products of representations, as follows: $a \ \rightarrow \ \Delta(a)\triangleright(V\otimes W)$\,.

The axioms involving the counit are
\begin{equation} \label{counit}
\varepsilon(ab) = \varepsilon(a)\,\varepsilon(b)\,, \ \ \ \ \ \varepsilon(\mathbf{1}) = 1\,, \ \ \ \ \ (\varepsilon\otimes\text{id})\circ\Delta(a) = (\text{id}\otimes\varepsilon)\circ\Delta(a) = a\,.
\end{equation}
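In the structure-constant notation of (\ref{(co)prod}), with the shorthand $\varepsilon(e_i)\doteq\varepsilon_i$ introduced only for this remark and the identification $\mathbb{K}\otimes A\simeq A$ understood, the last axiom in (\ref{counit}) takes a simple component form (a small check added here for illustration):
\begin{equation} \notag
(\varepsilon\otimes\text{id})\circ\Delta(e_i) = \Delta_i^{jk}\,\varepsilon_j\,e_k = e_i \ \ \ \ \ \Leftrightarrow \ \ \ \ \ \Delta_i^{jk}\,\varepsilon_j = \delta_i^k\,, \ \ \ \ \ \ \text{and similarly} \ \ \Delta_i^{jk}\,\varepsilon_k = \delta_i^j\,.
\end{equation}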
At last, the antipode brings in five more postulates:
\begin{equation} \label{antip1}
S(ab) = S(b)S(a)\,, \ \ \ \ S(\mathbf{1}) = \mathbf{1}\,, \ \ \ \ \varepsilon\circ S = \varepsilon\,, \ \ \ \ (S\otimes S)\circ\Delta(a) = \sigma\circ\Delta\circ S(a)\,,
\end{equation}
including the most helpful one,
\begin{equation} \label{antip2}
m\circ(S\otimes\text{id})\circ\Delta(a) = m\circ(\text{id}\otimes S)\circ\Delta(a) = \varepsilon(a)\,\mathbf{1}\,,
\end{equation}
which usually provides a good practical recipe for evaluating the antipode.

Complementing (\ref{A*}) with
\begin{equation} \label{dualHopf}
<\!fg\,,\ a\!> \ = \ <\!f\otimes g\,,\ \Delta(a)\!>\,, \ \ \ \ \ \varepsilon(f)= \ <\!f,\mathbf{1}\!>\,, \ \ \ \ \ <\!S(f),a\!> \ = \ <\!f,S(a)\!>\,,
\end{equation}
and treating the counit of the Hopf algebra $A$ as the unity in $A^*$\,, one thus establishes a Hopf algebra structure on $A^*$\,. In particular, a commutative product dualizes to a cocommutative coproduct, and vice versa.

A widely known example of a dual pair of Hopf algebras is provided by $U_{\mathfrak{g}}$ and the (commutative Hopf) algebra of functions on the Lie group $G$ corresponding to $\mathfrak{g}$\,. To make $U_{\mathfrak{g}}$ a (cocommutative) Hopf algebra it suffices to postulate
\begin{equation} \label{HopfUg}
\Delta(e_i) = e_i\otimes 1 + 1\otimes e_i\,, \ \ \ \ \ \ \varepsilon(e_i) = 0\,, \ \ \ \ \ \ S(e_i) = -e_i
\end{equation}
on the generators (a coproduct of this type is called \emph{primitive})\,. Conversely, the numerical coordinate functions on a (matrix) group, $g_{ij}\equiv(a_{ij},g)\,, \ (gh)_{ij}=g_{ik}h_{kj}\,, \ g,h\in G$\,, form a commutative Hopf algebra with pointwise multiplication,
\begin{equation} \label{G}
(a_{ij}\,a_{mn},g) = (a_{mn}\,a_{ij},g) = g_{ij}\,g_{mn}\,, \ \ \ \ \Delta(a_{ij}) = a_{ik}\otimes a_{kj}\,, \ \ \ \ \varepsilon(a_{ij}) = \delta_{ij}\,,
\end{equation}
whose Hopf properties originate from the duality-like conditions
\begin{equation} \label{group}
(\Delta(a_{ij}),g\otimes h) = (a_{ij},gh)\,, \ \ \ \ \varepsilon(a_{ij}) = (a_{ij},\mathbf{1})\,, \ \ \ \ (S(a_{ij}),g) = (a_{ij}, g^{-1})\,.
\end{equation}
There exists a general statement (the Milnor--Moore theorem) that, roughly speaking, any commutative Hopf algebra is dual to $U_{\mathfrak{g}}$ for some Lie algebra $\mathfrak{g}$\,. In this respect, (co)commutative Hopf algebras do not provide genuinely new structures. Genuinely new examples of Hopf algebras are related to quantum groups.
\newpage

Consider now the vector space Hom$(A,A')$ of linear maps from a Hopf algebra $A$ to an associative algebra $A'$ with unity $\mathbf{1}'$\,. On this space, a bilinear operation called the \emph{convolution} product can be introduced as follows ($\phi,\psi\in\text{Hom}(A,A')$)\,:
\begin{equation} \label{convol}
(\phi*\psi)(a) \doteq m\circ(\phi\otimes\psi)\circ\Delta(a) \equiv \phi(a_{(1)})\,\psi(a_{(2)})\,.
\end{equation}
Here we are using a self-evident notation for the coproduct,
\begin{equation} \label{Sw}
\Delta(a) = a_{(1)}\otimes a_{(2)}\,, \ \ \ \ \ \ \Delta^2(a) = a_{(1)}\otimes a_{(2)}\otimes a_{(3)}\,, \ \ \ \ldots \ \ .
\end{equation}
The convolution algebra is evidently associative,
\begin{equation}
((\phi*\psi)*\chi)(a) = (\phi*(\psi*\chi))(a) = \phi(a_{(1)})\,\psi(a_{(2)})\,\chi(a_{(3)})\,,
\end{equation}
with the map $\iota(a)=\varepsilon(a)\,\mathbf{1}'$ playing, due to (\ref{counit}), the role of the unity, $\phi*\iota=\iota*\phi=\phi$\,:
\begin{equation}
(\phi*\iota)(a) = \phi(a_{(1)})\,\varepsilon(a_{(2)})\,\mathbf{1}' = \phi(a_{(1)}\varepsilon(a_{(2)})) = \phi(a)\,.
\end{equation}
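For a primitive generator $e_i$ of $U_{\mathfrak{g}}$\,, see (\ref{HopfUg})\,, the convolution product (\ref{convol}) takes a particularly simple form (a quick check added here for illustration):
\begin{equation} \notag
(\phi*\psi)(e_i) = \phi(e_i)\,\psi(\mathbf{1}) + \phi(\mathbf{1})\,\psi(e_i)\,;
\end{equation}
in particular, $(S*\text{id})(e_i) = S(e_i)\,\mathbf{1} + S(\mathbf{1})\,e_i = -e_i + e_i = 0 = \varepsilon(e_i)\,\mathbf{1}$\,, in accordance with (\ref{antip2}) and with the discussion below.
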
Let us now consider the case $A'=A$\,. Axiom (\ref{antip2}) shows that the antipode and the identity map are mutually inverse in the convolution algebra, $S*\text{id}=\text{id}*S=\iota$\,:
\begin{equation} \label{id}
(S*\text{id})(a) = S(a_{(1)})\,a_{(2)} = \varepsilon(a)\,\mathbf{1} = \iota(a)\,, \ \ \ \ \ (\text{id}*S)(a) = a_{(1)}\,S(a_{(2)}) = \iota(a)\,.
\end{equation}
By the way, this demonstrates the uniqueness of the antipode (an element of an associative algebra with unity has at most one inverse). Another useful corollary is obtained via the following observation. In either the commutative or the cocommutative case,
\begin{equation}
(S^2*S)(a) = S^2(a_{(1)})\,S(a_{(2)}) = S(a_{(2)}S(a_{(1)})) = \varepsilon(a)S(\mathbf{1}) = \varepsilon(a)\,\mathbf{1} = \iota(a)\,,
\end{equation}
where the assumption enters through $a_{(2)}S(a_{(1)}) = \varepsilon(a)\,\mathbf{1}$\,, which follows from (\ref{id}) in both cases. This implies $S^2=\text{id}$ \,owing to (\ref{id}). In a general Hopf algebra this is not necessarily true: $S^{-1}$ (when it exists) does not, in general, coincide with $S$\,.

\end{document}