Delay Invariant Convolutional Network Codes
Qifu Tyler Sun, Sidharth Jaggi, and ShuoYen Robert Li
Institute of Network Coding
The Chinese University of Hong Kong
Hong Kong SAR, China
qfsun@inc.cuhk.edu.hk, {jaggi, bobli}@ie.cuhk.edu.hk
This work is supported by AoE grant E02/08 from the University Grants Committee of the Hong Kong Special Administrative Region, China.
Abstract—In this work, we define delay invariant convolutional network codes, which guarantee multicast communication at asymptotically optimal rates under arbitrary delay patterns in the network. We show the existence of such a code over every symbol field. Moreover, such a code can be constructed with high probability when coding coefficients are independently and uniformly chosen from a sufficiently large set of coding operations. On the other hand, when the symbol field is no smaller than the number of receivers, we devise a method to efficiently construct a delay invariant convolutional network code with scalar coding coefficients.
Introduction
Convolutional network coding is a form of linear network coding that deals with a pipeline of messages as a whole rather than individually. It allows coding over the combined space-time domain. Over a network with cycles, the propagation and encoding of sequential data symbols may convolve together. The propagation delay thus becomes an essential factor in network coding, and convolutional network coding is naturally adopted to ensure causal data propagation around cycles. Under the formulation of propagation delay adopted here, the transmission delay over channels is absorbed into processing delay at nodes. The node processing delay, in turn, is measured on every adjacent pair (d, e) of channels, where d and e are, respectively, an incoming and an outgoing edge of a node. The delay over (d, e), represented by a nonnegative integer power of D, is the latency from the time a data symbol is received from d until the time it is first incorporated into the information sent onto e. Consequently, when data is propagated over the network via a convolutional network code (CNC), a delay may either be introduced by the network or inserted intentionally by the code as part of the coding operation. As long as there is a positive delay along every cycle in the network, a CNC can ensure causal data propagation around every cycle.
In order to guarantee causality, a CNC constructed by existing algorithms, whether deterministic or random, is optimal (in the sense that it achieves the asymptotically optimal data transmission rate) only with respect to a particular delay pattern. In this paper, after reviewing some fundamentals of convolutional network coding, we introduce a new class of CNCs for multicast networks, called delay invariant CNCs (DICNCs). A DICNC is optimal in the presence of arbitrary delays in the network, so the code can be constructed without knowledge of the network delays. Moreover, once it is deployed in the network, its optimality is not affected by delay changes brought about by imperfect synchronization or other issues. Even though there are infinitely many (integer) delay patterns, we show that every field-based optimal linear network code qualifies as a DICNC. We further show that a DICNC exists over every symbol field, and that random coding suffices to construct one with high probability. Finally, when the symbol field is no smaller than the number of receivers, we adapt a known acyclic algorithm, via a variation of an existing technique, to deterministically construct a DICNC with scalar coding operations.
Fundamentals on Convolutional Network Coding
Conventions. We shall adopt the following conventions unless otherwise specified.
A network means a finite directed multigraph that contains a unique node s with no incoming edges. This node is called the source. No edge loops around a node. The edge set is denoted by E. Every edge represents a noiseless transmission channel of unit capacity.
For every node v, denote by In(v) and Out(v), respectively, the sets of its incoming and outgoing edges. An ordered pair (d, e) of edges is called an adjacent pair when there is a node v such that d ∈ In(v) and e ∈ Out(v).
Outgoing edges from s are called data-generating edges. Abbreviate |Out(s)| as ω, which represents the (fixed) data-generating rate of the source. Assume an ordering on the edges in E led by the data-generating edges.
A sink means a non-source node v to which there are ω edge-disjoint paths from s. The number of sinks in the network will be denoted by η.
Let 𝔽 denote a finite field, which represents the symbol alphabet, and 𝔽_q the finite field with q elements. Let P be a PID, which represents the general ensemble of data units. In conventional network coding, P = 𝔽, and in convolutional network coding, P = 𝔽[(D)], the ring of rational power series over 𝔽.
In a network, a P-linear network code (P-LNC), denoted by (k_{d,e}), assigns a coding coefficient k_{d,e} ∈ P to every pair (d, e) of edges such that k_{d,e} = 0 when (d, e) is not an adjacent pair. An 𝔽-CNC means an 𝔽[(D)]-LNC (k_{d,e}) with k_{d,e} ∈ 𝔽[D] ⊂ 𝔽[(D)].
A P-LNC is said to be normal if it determines a unique set of coding vectors. Normality of an LNC is a prerequisite for the notion of data propagation via the code. In a network with cycles, not every P-LNC is normal. A sufficient condition for normality of a P-LNC (k_{d,e}) is that det(I_{|E|−ω} − [k_{d,e}]_{d,e∈E\Out(s)}) is a unit in P, where [k_{d,e}]_{d,e∈E\Out(s)} is the square matrix with rows and columns indexed by E\Out(s). The conditions for a CNC to be normal have been discussed in the literature; in particular, a causal CNC is normal.
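The normality criterion above can be checked mechanically. The following sketch (stdlib-only Python; the toy three-edge cyclic code is our own illustration, not a network from the paper) represents elements of 𝔽_2[D] as coefficient lists and verifies that det(I − K) has a nonzero constant term, i.e., is a unit in 𝔽_2[(D)]:

```python
# Polynomials over GF(2) in the delay variable D, stored as coefficient
# lists: [c0, c1, c2] stands for c0 + c1*D + c2*D^2.
def padd(a, b):                      # addition = XOR of coefficients
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0)
            for i in range(n)]

def pmul(a, b):                      # schoolbook product over GF(2)
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= x & y
    return out

def det3(M):                         # 3x3 determinant; over GF(2), + and - coincide
    (a, b, c), (d, e, f), (g, h, i) = M
    t = pmul(a, padd(pmul(e, i), pmul(f, h)))
    t = padd(t, pmul(b, padd(pmul(d, i), pmul(f, g))))
    return padd(t, pmul(c, padd(pmul(d, h), pmul(e, g))))

# Hypothetical toy code: three non-source edges forming a directed cycle;
# entries are the effective coefficients k_{d,e} * D^{t(d,e)}, with delay
# one placed on two of the three adjacent pairs along the cycle.
K = [[[0],    [0, 1], [0]],
     [[0],    [0],    [1]],
     [[0, 1], [0],    [0]]]

# B = I - K; over GF(2), subtraction equals addition.
B = [[padd([1] if r == c else [0], K[r][c]) for c in range(3)]
     for r in range(3)]
detB = det3(B)
print(detB)      # [1, 0, 1], i.e., 1 + D^2
assert detB[0] == 1   # nonzero constant term => unit of F_2[(D)] => normal
```

Placing positive delay along the cycle guarantees that K evaluated at D = 0 supports no cycle, which is why the constant term of det(I − K) is 1.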
Definition. A delay function t in a network assigns a nonnegative integer to every adjacent pair (d, e) such that, along every cycle, there is at least one pair (d, e) with t(d, e) > 0. An 𝔽-CNC is said to be t-causal if the coding coefficient for every adjacent pair (d, e) is divisible by D^{t(d,e)}.
In an acyclic network, every 𝔽-LNC is naturally a t-causal 𝔽-CNC with scalar coding coefficients in 𝔽 ⊂ 𝔽[D], where t is the zero function.
Proposition. A t-causal 𝔽-CNC is normal.
Example. For succinctness, all illustrations of networks in this paper omit the source node s. The figure below depicts the coding coefficients (k_{d,e}) of an 𝔽_2-CNC on the Shuttle Network. The code is not normal. However, given an arbitrary delay function t, the t-causal 𝔽_2-CNC (k_{d,e}D^{t(d,e)}) becomes normal.
Definition. A normal P-LNC with coding vectors f_e is called a P-linear multicast when rank_P(f_e : e ∈ In(v)) = ω for each sink v. A linear multicast is optimal in the sense that every sink can obtain enough information to recover the source message.
Delay Invariant Convolutional Network Codes
Figure. An 𝔽_2-CNC on the Shuttle Network prescribed by coding coefficients k_{d,e}. It is not normal, because there does not exist a set of coding vectors for (k_{d,e}). However, given an arbitrary delay function t, the 𝔽_2-CNC (k_{d,e}D^{t(d,e)}) becomes normal. For instance, if t assigns delay one to (e_7, e_3), (e_9, e_5), (e_10, e_4), and delay zero to all other pairs, the coding vectors of the t-causal code (k_{d,e}D^{t(d,e)}) include f_{e_1} = f_{e_3} = (1 0)^T and f_{e_2} = f_{e_10} = (0 1)^T, with f_{e_4} = f_{e_5} = f_{e_6} and f_{e_7} = f_{e_8} = f_{e_9} as depicted. Consequently, it qualifies as a delay invariant 𝔽_2-CNC.
A causal convolutional multicast is always affiliated with a particular delay function t, so t must be known in advance for code construction. When t changes for some unexpected reason, however, the already deployed convolutional multicast is not necessarily optimal anymore. For instance, consider the network depicted in the figure below, together with a delay function t that assigns delay one to (e_5, e_3), (e_8, e_4), (e_12, e_6), (e_13, e_8), (e_9, e_10), and delay zero to all other pairs. If the efficient algorithm for t-causal convolutional multicast construction is adopted, a possible resulting code (k_{d,e}D^{t(d,e)}) is prescribed in the figure by the nonzero coefficients k_{d,e}. However, if the delay function t is changed to assign delay one to (e_6, e_7) and delay zero to (e_8, e_4), then the new t-causal CNC (k_{d,e}D^{t(d,e)}) is no longer a convolutional multicast: the coding vectors for e_4 and e_12 become identical, so sink t_1 cannot reconstruct both data streams. Under the new function t, the code (k_{d,e}) would have to be redesigned in order to remain optimal. This motivates a type of CNC that is optimal with respect to an arbitrary delay function.
Definition. An 𝔽-CNC (k_{d,e}) is called a delay invariant CNC (DICNC) if, for every delay function t, the code (k_{d,e}D^{t(d,e)}) is a t-causal convolutional multicast.
As an example, the 𝔽_2-CNC depicted in the figure above is a delay invariant 𝔽_2-CNC. One merit of a DICNC is that the code design is independent of delay functions. One possible way to obtain a DICNC is to construct an 𝔽-linear multicast.
Proposition. Every 𝔽-linear multicast is a DICNC.
Proof. Let (k_{d,e}) be an 𝔽-linear multicast with coding vectors f_e. Given a delay function t, the t-causal CNC (k_{d,e}D^{t(d,e)}) is normal by the proposition above. Let f′_e be the coding vectors determined by (k_{d,e}D^{t(d,e)}). Define the homomorphism m that projects 𝔽[(D)] onto 𝔽 by fixing 𝔽 and mapping D to 1. Applied componentwise, m extends to a mapping from 𝔽[(D)]^ω to 𝔽^ω, and m(f′_e) = f_e for every edge e by the uniqueness of coding vectors. The code (k_{d,e}D^{t(d,e)}) is an 𝔽-convolutional multicast because, for every sink v,
rank_{𝔽[(D)]}(f′_e : e ∈ In(v)) ≥ rank_𝔽(m(f′_e) : e ∈ In(v)) = rank_𝔽(f_e : e ∈ In(v)) = ω. ∎
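The rank inequality above rests on the fact that the evaluation m, fixing 𝔽 and sending D to 1, commutes with determinants, so a full-rank evaluated matrix forces the polynomial matrix to be full rank as well. A stdlib-only numeric illustration (toy 2×2 matrix over 𝔽_2[D]; all names are ours):

```python
# Polynomials over GF(2) as coefficient lists: [1, 1] stands for 1 + D.
def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0)
            for i in range(n)]

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= x & y
    return out

def m(p):                   # evaluation homomorphism: D -> 1 over GF(2)
    return sum(p) & 1

# Hypothetical juxtaposed coding vectors over GF(2)[D]:
# F = [[1, D], [D, 1 + D]]
F = [[[1], [0, 1]], [[0, 1], [1, 1]]]
detF = padd(pmul(F[0][0], F[1][1]), pmul(F[0][1], F[1][0]))  # 1 + D + D^2

# m commutes with the determinant ...
lhs = m(detF)
rhs = (m(F[0][0]) * m(F[1][1]) + m(F[0][1]) * m(F[1][0])) & 1
assert lhs == rhs == 1
# ... so det(m(F)) != 0 over GF(2) already certifies det(F) != 0 over
# GF(2)[D], i.e., full rank over the rational power series.
assert any(detF)
```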
Even though a source always generates a time sequence of messages to be propagated over a network, when the network is acyclic an LNC is conventionally assumed to ignore the issue of data communication delay and to deal with each single message individually. One reason is that the upstream-to-downstream ordering of nodes enables buffering, pipelining, and the resulting concerted synchronization among all nodes, so that the encoding and transmission of a message is independent of sequential messages. This justifies the model of a zero delay function. From another perspective, justified by the proposition above, even if nonzero network delays are taken into account, the design of a field-based optimal LNC still suffices to induce an optimal LNC in the combined space-time domain.
The proposition above guarantees the existence of a DICNC when 𝔽 is sufficiently large. In fact, we can say more.
Theorem. In any network, there exists a DICNC with every polynomial coding coefficient of degree at most O(|E|^2 log_p |E|), where p is the characteristic of 𝔽.
Proof. The key is to establish a homomorphism from 𝔽[D] onto an appropriate extension field of 𝔽 such that the infinitely many delays are mapped to a finite number of values in that field. It then suffices to show the existence of a certain LNC which induces a DICNC. See the Appendix for details. ∎
Figure. (a) A given network containing a cycle. (b) The associated acyclic network, in which the bottom-layer nodes qualify as sinks. Every 𝔽-linear multicast on the associated network subject to (1) induces a delay invariant convolutional multicast on the original network.
Corollary. Consider an 𝔽-CNC with coding coefficients k_{d,e} independently and uniformly selected from the polynomials in 𝔽[D] with degrees smaller than d. When p^d > (2(p+1)d)^{|E|^2}, where p is the characteristic of 𝔽, the code is a DICNC with high probability, approaching 1 as d grows.
Proof. See Appendix. ∎
In comparison, with respect to a fixed delay function t, random coding constructs a t-causal 𝔽_q-convolutional multicast with high probability once q^d is sufficiently large.
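To get a feel for the degree bound in the corollary, the following sketch computes the smallest degree d satisfying p^d > (2(p+1)d)^{|E|^2}. The helper name and the tiny edge counts are illustrative assumptions, not values from the paper:

```python
def min_degree(p, num_edges):
    """Smallest d with p**d > (2*(p+1)*d)**(num_edges**2), the sufficient
    condition in the corollary for random coding over coefficients of
    degree < d to yield a DICNC with high probability."""
    d = 1
    # The left side grows exponentially in d, the right side only
    # polynomially, so this loop always terminates.
    while p ** d <= (2 * (p + 1) * d) ** (num_edges ** 2):
        d += 1
    return d

print(min_degree(2, 2))   # 30
print(min_degree(2, 3))   # 81
```

Even for toy edge counts the required degree is substantial, which matches the paper's point that randomly constructed DICNCs carry a high encoding complexity.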
Efficient Deterministic Code Construction
Although DICNCs can be generated by random coding, the coding coefficients have to be chosen from a sufficiently large set of polynomials so as to guarantee a high probability of successful construction, and hence the encoding complexity may become very high. In this section, we demonstrate that when |𝔽| ≥ η, the number of sinks in a network N, the deterministic acyclic algorithm suffices to construct a DICNC with scalar coding coefficients, so that the coding complexity becomes much lower:
When N is acyclic, the algorithm directly constructs an 𝔽-linear multicast, which is naturally a DICNC by the proposition above. The computational complexity of the algorithm is O(|E|ηω(ω+η)).
When N contains cycles, we can associate it with an acyclic network Ñ via the algorithm below. An 𝔽-linear multicast on Ñ subject to a straightforward condition (1) can then be constructed by the acyclic algorithm, and it in turn induces a DICNC on N. It takes O(|E|^3) time to construct a DICNC in this way.
Algorithm. Given a network N, the associated acyclic network Ñ consists of nodes in five layers, labeled 0 to 4 from upstream to downstream. The notation for a node carries a subscript indicating its layer. All edges are between adjacent layers.
Layer 0 consists of just the source node.
Corresponding to each edge e ∈ E\Out(s), there is a layer-1 node e_1 and an edge e_{(1)} from s to e_1. Thus there are |E| − ω data-generating edges in the associated network.
Corresponding to each edge e ∈ E, there is a layer-2 node e_2. When e ∈ E\Out(s), there is also an edge e_{(2)} from e_1 to e_2. Corresponding to every adjacent pair (d, e) in N, there is an edge from e_1 to d_2.
Corresponding to each edge e ∈ E, there is a layer-3 node e_3 as well as an edge e_{(3)} from e_2 to e_3.
Figure. Under the delay function t that assigns delay one to (e_5, e_3), (e_8, e_4), (e_12, e_6), (e_13, e_8), (e_9, e_10), and delay zero to all other pairs, an 𝔽-CNC (k_{d,e}), depicted here by its nonzero coding coefficients, can be constructed such that (k_{d,e}D^{t(d,e)}) is a t-causal convolutional multicast. Nevertheless, (k_{d,e}) is not a DICNC: when t is changed to assign delay one to (e_6, e_7) and delay zero to (e_8, e_4), the new t-causal CNC (k_{d,e}D^{t(d,e)}) has identical coding vectors for e_4 and e_12, so that sink t_1 cannot reconstruct both data streams.
Corresponding to each sink v in N, there is a layer-4 node v_4. Arbitrarily take ω edge-disjoint paths in N that lead from data-generating edges to v. For every e ∈ E, install an edge from e_3 to v_4 unless e ∈ In(v) and e is an edge on these paths.
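The layer-by-layer construction above can be sketched in a few lines. All names here are illustrative, the pair-edge direction (from e_1 to d_2) follows the description in the text, and the tiny input network is a hypothetical example:

```python
def associate(out_s, edges, adj_pairs, sink_path_edges):
    """Build the five-layer associated acyclic network as a list of arcs.
    out_s: data-generating edges Out(s); edges: all of E;
    adj_pairs: adjacent pairs (d, e); sink_path_edges: for each sink v,
    the set of its incoming on-path edges from the chosen edge-disjoint
    paths (these are the only layer-3 nodes NOT wired to v_4)."""
    arcs = [('s', ('L1', e)) for e in edges if e not in out_s]          # e_(1)
    arcs += [(('L1', e), ('L2', e)) for e in edges if e not in out_s]   # e_(2)
    arcs += [(('L1', e), ('L2', d)) for (d, e) in adj_pairs]            # pair edges
    arcs += [(('L2', e), ('L3', e)) for e in edges]                     # e_(3)
    for v, on_path in sink_path_edges.items():
        arcs += [(('L3', e), ('L4', v)) for e in edges if e not in on_path]
    return arcs

# Tiny hypothetical network: Out(s) = {e1}; e2 leaves the node that
# receives e1, so (e1, e2) is the only adjacent pair; sink v is reached
# via the on-path edge e2.
arcs = associate(['e1'], ['e1', 'e2'], [('e1', 'e2')], {'v': {'e2'}})
print(len(arcs))   # 6
```

Because every arc points from a lower-numbered layer to a higher-numbered one, the resulting multigraph is acyclic by construction, which is precisely what lets the acyclic algorithm run on it.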
The association of Ñ with N is illustrated in the figure above. The reference proved that every layer-4 node in Ñ qualifies as a sink. Moreover, it justified that the acyclic algorithm is able to construct an 𝔽-linear multicast on Ñ subject to the following condition:
(1) The coding coefficient is either 0 or 1 for every adjacent pair of the form (e_{(1)}, x). The coding coefficient is 1 for every adjacent pair of the form (e_{(1)}, e_{(2)}) or (e_{(2)}, e_{(3)}).
Proposition. Every 𝔽-linear multicast (k_{x,y}) on Ñ subject to (1) induces a DICNC (k_{d,e}) on N via the natural correspondence between adjacent pairs in N and the pair-edges in Ñ.
Proof. Let t be an arbitrary delay function on N, and t′ the delay function on Ñ such that t′(x, y) = t(d, e) when x is the pair-edge corresponding to the adjacent pair (d, e) in N and y = d_{(3)}, and t′(x, y) = 0 otherwise. According to the proposition above, the code (k_{x,y}D^{t′(x,y)}) is a t′-causal convolutional multicast. Let f′_e denote the coding vector of edge e_{(3)} under the code (k_{x,y}D^{t′(x,y)}) on Ñ. Note that [f′_e]_{e∈E\Out(s)} = I_{|E|−ω} − [k_{d,e}D^{t(d,e)}]_{d,e∈E\Out(s)}. Since the CNC (k_{d,e}D^{t(d,e)}) on N is t-causal, it is also normal, which implies det(I_{|E|−ω} − [k_{d,e}D^{t(d,e)}]_{d,e∈E\Out(s)}) ≠ 0. As a result, rank(f′_e : e ∈ E\Out(s)) = |E| − ω, and hence Theorem 20 in the reference may be applied to show that (k_{d,e}D^{t(d,e)}) is a t-causal convolutional multicast on N. ∎
Summary
The coding coefficients k_{d,e} of a convolutional network code (CNC) are always selected to be polynomials over a symbol field. When they are implemented in the network, the effective coding coefficients are in fact k_{d,e}D^{t(d,e)}, where t is a delay function on adjacent pairs of edges modeling the combination of all data propagation delays. In this paper, we introduced the concept of delay invariant CNCs (DICNCs), which are optimal with respect to arbitrary delays. In particular, a CNC (k_{d,e}) is said to be delay invariant if, for any delay function t, the resulting CNC (k_{d,e}D^{t(d,e)}) is always optimal, i.e., a convolutional multicast. The main results of this paper show the existence of a DICNC over any symbol field, together with a random coding technique that produces a DICNC with high probability when coding coefficients are selected from a sufficiently large set of polynomials. Moreover, when the symbol field is no smaller than the number of sinks, a variation of an existing technique adapts the acyclic algorithm to the construction of a DICNC with scalar coding coefficients. The DICNC constructed by the deterministic algorithm thus has much lower encoding complexity than one generated by random coding. However, even in an acyclic network, it remains unclear whether there is a polynomial-time algorithm for DICNC construction over a smaller symbol field, for instance the binary field. Any investigation along this line would be intriguing.
Acknowledgment
The authors appreciate helpful discussions with Pak Hou Che, Chung Chan and Xunrui Yin.
References
R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, “Network information flow,” IEEE Trans. Inf. Theory, vol. 46, pp. 1204-1216, Feb. 2000.
N. Cai and W. Guo, “The conditions to determine convolutional network coding on matrix representation,” in Proc. NetCod, Lausanne, Switzerland, Jun. 2009.
E. Erez and M. Feder, “Efficient network code design for cyclic networks,” IEEE Trans. Inf. Theory, vol. 56, no. 8, pp. 3862-3878, Aug. 2010.
C. Fragouli and E. Soljanin, “A connection between network coding and convolutional codes,” in Proc. Int. Conf. Commun. (ICC), Paris, France, Jun. 2004, vol. 2, pp. 661-666.
T. Ho, M. Médard, R. Koetter, D. R. Karger, M. Effros, J. Shi, and B. Leong, “A random linear network coding approach to multicast,” IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4413-4430, Oct. 2006.
S. Jaggi, M. Effros, T. Ho, and M. Médard, “On linear network coding,” in Proc. 42nd Allerton Conf. Commun., Control, and Comput., Monticello, IL, Sep. 2004.
S. Jaggi, P. Sanders, P. A. Chou, M. Effros, S. Egner, K. Jain, and L. Tolhuizen, “Polynomial time algorithms for multicast network code construction,” IEEE Trans. Inf. Theory, vol. 51, no. 6, pp. 1973-1982, Jun. 2005.
R. Koetter and M. Médard, “An algebraic approach to network coding,” IEEE/ACM Trans. Netw., vol. 11, no. 5, pp. 782-795, Oct. 2003.
S.-Y. R. Li, R. W. Yeung, and N. Cai, “Linear network coding,” IEEE Trans. Inf. Theory, vol. 49, no. 2, pp. 371-381, Feb. 2003.
S.-Y. R. Li and R. W. Yeung, “On convolutional network coding,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Seattle, WA, Jul. 2006, pp. 1743-1747.
S.-Y. R. Li and Q. T. Sun, “Network coding theory via commutative algebra,” IEEE Trans. Inf. Theory, vol. 56, no. 1, pp. 403-415, Jan. 2011.
S.-Y. R. Li and Q. T. Sun, “Linear network coding: theory and algorithms,” Proc. IEEE, vol. 99, no. 3, Mar. 2011.
R. W. Yeung, S.-Y. R. Li, N. Cai, and Z. Zhang, Network Coding Theory. Boston, MA: Now Publishers, 2006.
Appendix. Proofs of the Theorem and the Corollary
Convention. For a normal P-LNC (k_{d,e}), we adopt the abbreviations
A_k = [k_{d,e}]_{d∈Out(s), e∈E\Out(s)}, B_k = I_{|E|−ω} − [k_{d,e}]_{d,e∈E\Out(s)}.
Then the coding vector f_e for e ∈ E\Out(s) can be represented by
f_e = (1/det(B_k)) · A_k · Adj(B_k) · u_e,
where Adj(·) denotes the adjugate of a square matrix, and u_e is the (|E|−ω)-dimensional column vector indexed by E\Out(s) whose entry labeled by e is 1 and all other entries are 0.
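The formula for f_e hinges on the adjugate identity Adj(B)·B = det(B)·I, which makes the expression well defined whenever det(B_k) is invertible. A quick numeric sanity check of that identity (2×2 integer example, purely illustrative):

```python
def det2(B):
    return B[0][0] * B[1][1] - B[0][1] * B[1][0]

def adj2(B):                       # adjugate of a 2x2 matrix
    return [[B[1][1], -B[0][1]], [-B[1][0], B[0][0]]]

def matmul2(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

B = [[1, 2], [3, 4]]
d = det2(B)                        # -2
# Adj(B)·B = det(B)·I, so dividing by det(B) recovers the inverse.
assert matmul2(adj2(B), B) == [[d, 0], [0, d]]
```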
For a subfield 𝔽_p of 𝔽 and an element α ∈ 𝔽, denote by 𝔽_p(α) the smallest subfield of 𝔽 containing 𝔽_p and α. ∎
Now associate every adjacent pair (d, e) with an indeterminate x_{d,e}. Let 𝔽[D][x] denote the polynomial ring in these indeterminates over 𝔽[D]. Write x_{d,e} = 0 when (d, e) is not an adjacent pair. For every sink v, select ω edge-disjoint paths from the data-generating edges to ω distinct edges belonging to In(v). Denote by P_v the juxtaposition of the vectors u_e, where e ranges over the said distinct edges in In(v). Moreover, given a delay function t, let M_v(t) denote the |E| × |E| matrix
M_v(t) = [ A_x(t) 0 ; I_{|E|−ω} − K_x(t) P_v ],
where A_x(t) = [x_{d,e}D^{t(d,e)}]_{d∈Out(s), e∈E\Out(s)} and K_x(t) = [x_{d,e}D^{t(d,e)}]_{d,e∈E\Out(s)}.
In order to qualify an 𝔽-CNC (k_{d,e}) as a DICNC, it suffices to show that, for every delay function t, the evaluation of det(M_v(t)) at x_{d,e} = k_{d,e} is nonzero in 𝔽[D] for all sinks v.
To justify this, consider the t-causal CNC (h_{d,e}), where h_{d,e} = k_{d,e}D^{t(d,e)}. By the proposition above, the code is normal, and hence det(B_h) ≠ 0 and det(Adj(B_h)) ≠ 0. Moreover, similar to the proof of Lemma 1 in the reference, the evaluation of det(M_v(t)) at x_{d,e} = k_{d,e} and det(A_h·Adj(B_h)·P_v) differ only by a nonzero factor, namely a power of det(B_h) up to sign. Thus, if the former is nonzero, then det(A_h·Adj(B_h)·P_v) ≠ 0, and therefore
rank_{𝔽[(D)]}(f_e : e ∈ In(v)) = rank_{𝔽[(D)]}(A_h·Adj(B_h)·[u_e]_{e∈In(v)}) = ω.
First take 𝔽 = 𝔽_2, and let m be the integer such that 3^m ≥ c|E|^2 log_2 |E| > 3^{m−1} for an appropriate constant c. Define a homomorphism φ: 𝔽[D][x] → GF(2^{2·3^m})[x] that fixes 𝔽_2 as well as the indeterminates x_{d,e}, and maps D to α, where α is an element of order 3^{m+1} in GF(2^{2·3^m}) with 𝔽_2(α) = GF(2^{2·3^m}). The first lemma below asserts the existence of such an α. Applied entrywise, φ extends to a mapping from matrices over 𝔽[D][x] to matrices over GF(2^{2·3^m})[x]. Because α has order 3^{m+1}, the set
φ({M_v(t): t is a delay function})
is finite for each sink v, even though there are infinitely many delay functions. Consequently, the highest degree of every indeterminate in the polynomial ∏_{t} φ(det(M_v(t))), with the product taken over delay functions realizing the distinct matrices φ(M_v(t)), is bounded. Therefore, when a subset F ⊆ GF(2^{2·3^m}) has more elements than this degree bound, there exist values k′_{d,e} ∈ F such that the evaluation of
∏_{v: sink} ∏_{t: delay function} φ(det(M_v(t)))
at x_{d,e} = k′_{d,e} is not 0 ∈ GF(2^{2·3^m}). Since |GF(2^{2·3^m})| = 2^{2·3^m} exceeds this bound for c chosen appropriately, such a subset F exists. Let P denote the set of polynomials in 𝔽_2[D] with degrees smaller than 2·3^m = O(|E|^2 log_2 |E|). Since 𝔽_2(α) = GF(2^{2·3^m}), the minimal polynomial of α over 𝔽_2 has degree 2·3^m. Therefore φ(P) = GF(2^{2·3^m}), and the 𝔽_2-CNC whose coding coefficient k_{d,e} is the unique polynomial in P mapped by φ to k′_{d,e} abides by the sufficient condition above. This establishes the Theorem for 𝔽 = 𝔽_2.
To establish the Corollary for 𝔽 = 𝔽_2, redefine m to be the smallest integer such that 2·3^m ≥ d > 2·3^{m−1}, and let P denote the set of polynomials in 𝔽_2[D] with degrees smaller than d. The polynomial ∏_{v: sink} ∏_{t: delay function} φ(det(M_v(t))) over GF(2^{2·3^m}) has bounded total degree, and the highest degree of every indeterminate in it is bounded. Thus, Lemma 4 in the reference asserts that when the indeterminates x_{d,e} are independently and uniformly assigned values in φ(P),
Pr(∏_{v: sink} ∏_{t: delay function} φ(det(M_v(t))) ≠ 0)
is bounded below as claimed. Consequently, when the coding coefficients of an 𝔽_2-CNC are independently and uniformly chosen from P, the probability for the code to be delay invariant is correspondingly high, since 5d > 3^{m+1}.
Next take 𝔽 = 𝔽_p, where p is an odd prime, and let m be the integer such that 2^m ≥ c|E|^2 log_p |E| ≥ 2^{m−1} for an appropriate constant c. Define a homomorphism φ: 𝔽[D] → GF(p^{2^m}) that fixes 𝔽 and maps D to α, where α is an element of order 2^m(p + (−1)^{(p+1)/2}) with 𝔽_p(α) = GF(p^{2^m}). The second lemma below asserts the existence of such an α. By an argument similar to the case 𝔽 = 𝔽_2, we can prove the Theorem and the Corollary for 𝔽 = 𝔽_p.
Finally take 𝔽 = 𝔽_q, where q is a positive integer power of a prime p. Let D and D′, respectively, represent the unit time to carry one data symbol in 𝔽_p and in 𝔽_q. Define a homomorphism ψ from 𝔽_p[(D)] into 𝔽_q[(D′)] by fixing 𝔽_p and mapping D to D′. Because ψ is one-to-one, given a delay invariant 𝔽_p-CNC (k_{d,e}) and a delay function t, the code ψ((k_{d,e}D^{t(d,e)})) qualifies as an 𝔽_q-convolutional multicast too. ∎
Lemma. In GF(2^{2·3^m}), where m ≥ 0, there exists an element α of order 3^{m+1} such that 𝔽_2(α) = GF(2^{2·3^m}).
Proof. Let ξ be a primitive element of GF(2^{2·3^m}). First, 3^{m+1} divides 2^{2·3^m} − 1: since the Euler totient of 3^{m+1} is 2·3^m, Euler's totient theorem gives 2^{2·3^m} ≡ 1 (mod 3^{m+1}). Set α = ξ^{(2^{2·3^m}−1)/3^{m+1}}, so that α has order 3^{m+1}. It remains to show 𝔽_2(α) = GF(2^{2·3^m}), which holds exactly when the multiplicative order of 2 modulo 3^{m+1} equals the extension degree 2·3^m. This order divides 2·3^m. When m = 0, the order of 2 modulo 3 is 2 = 2·3^0. For m > 0, the order of 2 modulo 3^{m+1} is a multiple of the order of 2 modulo 3^m, which is 2·3^{m−1} by the induction hypothesis; moreover, 2^{2·3^{m−1}} − 1 is divisible by 3^m but not by 3^{m+1}, so the order modulo 3^{m+1} must exceed 2·3^{m−1} and hence equals 2·3^m. ∎
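The number-theoretic fact at the heart of this lemma, namely that 2 has multiplicative order exactly 2·3^m modulo 3^{m+1}, is easy to spot-check numerically (stdlib-only sketch; the helper name is ours):

```python
def mult_order(a, n):
    """Multiplicative order of a modulo n (assumes gcd(a, n) = 1)."""
    x, k = a % n, 1
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

# ord(2 mod 3^(m+1)) = 2 * 3^m, so 2 is a primitive root mod every 3^(m+1).
for m in range(6):
    assert mult_order(2, 3 ** (m + 1)) == 2 * 3 ** m
print("verified for m = 0..5")
```

Since the degree [𝔽_2(α) : 𝔽_2] equals the order of 2 modulo the order of α, this is exactly the condition forcing 𝔽_2(α) to be the whole field GF(2^{2·3^m}).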
Lemma. Let p be an odd prime. In GF(p^{2^m}), where m ≥ 1, there exists an element α of order 2^m(p + (−1)^{(p+1)/2}) such that 𝔽_p(α) = GF(p^{2^m}).
Proof. Let ξ be a primitive element of GF(p^{2^m}). First consider p ≡ 1 (mod 4), so that (−1)^{(p+1)/2} = −1 and the claimed order is N = 2^m(p − 1). N divides p^{2^m} − 1 because of the factorization
p^{2^m} − 1 = (p − 1)(p + 1)(p^2 + 1)(p^4 + 1)···(p^{2^{m−1}} + 1),
in which p − 1 is divisible by 4 while each factor p^{2^i} + 1, 0 ≤ i < m, is divisible by 2 but not by 4. Set α = ξ^{(p^{2^m}−1)/N}, so that α has order N. Suppose 𝔽_p(α) = GF(p^{2^k}) for some k ≤ m. Since α ∈ GF(p^{2^k}), N divides p^{2^k} − 1, which implies
2^m(p − 1) | (p − 1)(p + 1)(p^2 + 1)···(p^{2^{k−1}} + 1).
Because each factor p^{2^i} + 1, 0 ≤ i < k, is divisible by 2 but not by 4, comparing powers of 2 on both sides forces k = m. Therefore 𝔽_p(α) = GF(p^{2^m}). The case p ≡ 3 (mod 4) can be shown analogously with N = 2^m(p + 1). ∎