Mathematical Proofs

Proof 1: Derivative of $\ln p$

The derivative of $\ln p$ is defined as:

$$\frac{d}{dp}\ln p = \lim_{\Delta p \to 0} \frac{\ln(p + \Delta p) - \ln p}{\Delta p}$$

Using the logarithm difference rule:

$$\ln(p + \Delta p) - \ln p = \ln\left(\frac{p + \Delta p}{p}\right)$$

Rewriting the derivative:

$$\frac{d}{dp}\ln p = \lim_{\Delta p \to 0} \frac{\ln\left(1 + \frac{\Delta p}{p}\right)}{\Delta p}$$

Using the limit property:

$$\lim_{x \to 0} \frac{\ln(1 + x)}{x} = 1$$

where we set $x = \frac{\Delta p}{p}$, we get, for small $\Delta p$:

$$\ln\left(1 + \frac{\Delta p}{p}\right) \approx \frac{\Delta p}{p}$$

Thus, our derivative simplifies to:

$$\frac{d}{dp}\ln p = \lim_{\Delta p \to 0} \frac{1}{\Delta p} \cdot \frac{\Delta p}{p} = \frac{1}{p}$$
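As a quick numerical sanity check of Proof 1 (a sketch, not part of the proof; the helper name is ours), the forward difference quotient from the definition above can be compared against $1/p$:

```python
import math

# Forward-difference approximation of d/dp ln(p), matching the
# limit definition used in Proof 1.
def ln_derivative_estimate(p: float, dp: float = 1e-8) -> float:
    return (math.log(p + dp) - math.log(p)) / dp

# The estimate should agree with the exact derivative 1/p.
for p in (0.1, 0.5, 2.0):
    print(f"p = {p}: estimate = {ln_derivative_estimate(p):.6f}, exact 1/p = {1 / p:.6f}")
```
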

Proof 2: Maximizing Entropy Using Lagrange Multipliers (Using Proof 1)

Step 1: Define the Entropy Function

$$H(p_1, p_2, \ldots, p_n) = -\sum_{i=1}^{n} p_i \ln p_i$$

with the constraint:

$$\sum_{i=1}^{n} p_i = 1$$

Step 2: Construct the Lagrange Function

$$\mathcal{L}(p_1, p_2, \ldots, p_n, \lambda) = -\sum_{i=1}^{n} p_i \ln p_i + \lambda\left(\sum_{i=1}^{n} p_i - 1\right)$$

Step 3: Differentiate the Entropy Term

Using the product rule together with $\frac{d}{dp}\ln p = \frac{1}{p}$ from Proof 1:

$$\frac{\partial}{\partial p_i}\left(-p_i \ln p_i\right) = -\left(\ln p_i + p_i \cdot \frac{1}{p_i}\right) = -(\ln p_i + 1)$$
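This step can be spot-checked numerically; a minimal sketch (function name illustrative) compares a centered finite difference against $-(\ln p + 1)$:

```python
import math

# The entropy summand whose derivative Step 3 computes.
def neg_p_ln_p(p: float) -> float:
    return -p * math.log(p)

# Centered finite difference at a sample point vs. the analytic derivative.
p, dp = 0.3, 1e-6
numeric = (neg_p_ln_p(p + dp) - neg_p_ln_p(p - dp)) / (2 * dp)
analytic = -(math.log(p) + 1)
print(f"numeric = {numeric:.6f}, analytic = {analytic:.6f}")
```
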

Step 4: Differentiate the Lagrange Constraint

$$\frac{\partial}{\partial p_i}\,\lambda\left(\sum_{j=1}^{n} p_j - 1\right) = \lambda$$

Step 5: Solve for $p_i$

$$-(\ln p_i + 1) + \lambda = 0 \quad\Longrightarrow\quad \ln p_i = \lambda - 1 \quad\Longrightarrow\quad p_i = e^{\lambda - 1}$$

Since $\sum_{i=1}^{n} p_i = 1$, we solve:

$$n e^{\lambda - 1} = 1 \quad\Longrightarrow\quad e^{\lambda - 1} = \frac{1}{n} \quad\Longrightarrow\quad p_i = \frac{1}{n} \quad \forall i$$

This shows that the entropy has a unique critical point when all probabilities are equal; the Taylor-series argument below confirms that this critical point is a maximum.
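The conclusion can also be checked numerically; the sketch below (names are illustrative, not from the proofs) compares $H$ at the uniform distribution against randomly drawn distributions:

```python
import math
import random

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i ln p_i (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

n = 5
uniform = [1 / n] * n

# No randomly drawn distribution on n outcomes should beat the uniform one.
random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    total = sum(w)
    p = [x / total for x in w]
    assert entropy(p) <= entropy(uniform) + 1e-12

print(f"H(uniform) = {entropy(uniform):.6f}, ln n = {math.log(n):.6f}")
```

Note that the maximum value itself is $H(1/n, \ldots, 1/n) = \ln n$.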


Proof 3: Confirming Maximum Entropy Using a Taylor Series (Using Proofs 1 and 2)

Step 1: Define Small Deviations

Let:

$$p_i = \frac{1}{n} + \delta_i, \quad \text{where} \quad \sum_{i=1}^{n} \delta_i = 0$$

Step 2: Taylor Expansion of $H(p)$

Expanding the entropy in a second-order Taylor series about the uniform point $p^* = \left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right)$, with $p_i^* = \tfrac{1}{n}$:

$$H(p^* + \delta) = H(p^*) + \sum_i \left.\frac{\partial H}{\partial p_i}\right|_{p^*} (p_i - p_i^*) + \frac{1}{2} \sum_{i,j} \left.\frac{\partial^2 H}{\partial p_i \partial p_j}\right|_{p^*} (p_i - p_i^*)(p_j - p_j^*) + O(\delta^3)$$

Step 3: Compute the First-Order Term

$$\frac{\partial H}{\partial p_i} = -(\ln p_i + 1)$$

At $p_i = 1/n$:

$$\left.\frac{\partial H}{\partial p_i}\right|_{p^*} = \ln n - 1$$

Since $\sum_i (p_i - p_i^*) = \sum_i \delta_i = 0$, the first-order term vanishes.

Step 4: Compute the Second-Order Term

$$\frac{\partial^2 H}{\partial p_i^2} = -\frac{1}{p_i}, \qquad \frac{\partial^2 H}{\partial p_i \partial p_j} = 0 \quad \text{for } i \neq j$$

At $p_i = 1/n$:

$$\frac{\partial^2 H}{\partial p_i^2} = -n$$

So the second-order term is:

$$\frac{1}{2}\sum_i (-n)(p_i - p_i^*)^2 = -\frac{n}{2}\sum_i (p_i - p_i^*)^2$$

Since $\sum_i (p_i - p_i^*)^2 \geq 0$, this term is nonpositive, and strictly negative for any nonzero deviation, confirming concavity.

Thus, entropy is maximized at $p_i = \frac{1}{n}$.
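As a final sanity check (a sketch; the deviation values below are arbitrary illustrative choices summing to zero), the second-order approximation $H \approx \ln n - \frac{n}{2}\sum_i \delta_i^2$ can be compared against the exact entropy:

```python
import math

def entropy(p):
    """Shannon entropy with natural logarithm."""
    return -sum(x * math.log(x) for x in p if x > 0)

n = 4
# Illustrative small deviations delta_i that sum to zero.
delta = [0.01, -0.005, 0.002, -0.007]
assert abs(sum(delta)) < 1e-12

p = [1 / n + d for d in delta]
exact = entropy(p)
# Second-order Taylor prediction about the uniform point p*.
taylor = math.log(n) - (n / 2) * sum(d * d for d in delta)

print(f"exact = {exact:.8f}, taylor = {taylor:.8f}")
```

The two values agree to within the $O(\delta^3)$ remainder, and both sit just below $\ln n$, consistent with the concavity argument above.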