The derivative of \( \ln p \) follows from the limit definition of the derivative:
\[ \frac{d}{dp} \ln p = \lim_{\Delta p \to 0} \frac{\ln(p + \Delta p) - \ln p}{\Delta p} \]Using the logarithm difference rule:
\[ \ln(p + \Delta p) - \ln p = \ln \left( \frac{p + \Delta p}{p} \right) \]Rewriting the derivative:
\[ \frac{d}{dp} \ln p = \lim_{\Delta p \to 0} \frac{\ln(1 + \frac{\Delta p}{p})}{\Delta p} \]Using the limit property:
\[ \lim_{x \to 0} \frac{\ln(1 + x)}{x} = 1 \]with \( x = \frac{\Delta p}{p} \) (note that \( x \to 0 \) as \( \Delta p \to 0 \)), we have, for small \( \Delta p \):
\[ \ln(1 + \frac{\Delta p}{p}) \approx \frac{\Delta p}{p} \]Substituting this into the quotient gives \( \frac{\Delta p / p}{\Delta p} = \frac{1}{p} \), so the derivative simplifies to:
\[ \frac{d}{dp} \ln p = \frac{1}{p} \]
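As a quick numerical sanity check (not part of the derivation; a minimal sketch assuming NumPy is available), the snippet below evaluates the difference quotient \( \frac{\ln(p + \Delta p) - \ln p}{\Delta p} \) for a few values of \( p \) and shrinking step sizes, and it should approach \( \frac{1}{p} \):

```python
# Sketch: the difference quotient of ln(p) should approach 1/p as dp -> 0.
import numpy as np

for p in [0.1, 0.5, 2.0]:
    for dp in [1e-2, 1e-4, 1e-6]:
        quotient = (np.log(p + dp) - np.log(p)) / dp
        print(f"p={p}, dp={dp:.0e}: quotient={quotient:.6f}, 1/p={1 / p:.6f}")
```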
We now use this to maximize the entropy \( H(p) = -\sum_{i=1}^{n} p_i \ln p_i \) subject to the constraint:
\[ \sum_{i=1}^{n} p_i = 1 \]Introducing a Lagrange multiplier \( \lambda \) and applying the product rule together with \( \frac{d}{dp} \ln p = \frac{1}{p} \), the stationarity condition is:
\[ \frac{\partial}{\partial p_i} \left[ -\sum_{j=1}^{n} p_j \ln p_j + \lambda \left( \sum_{j=1}^{n} p_j - 1 \right) \right] = -\ln p_i - 1 + \lambda = 0 \]so \( p_i = e^{\lambda - 1} \) for every \( i \). Since \( \sum p_i = 1 \), we solve:
\[ n e^{\lambda - 1} = 1 \] \[ e^{\lambda - 1} = \frac{1}{n} \] \[ p_i = \frac{1}{n}, \quad \forall i \]So the only stationary point is the uniform distribution: entropy is maximized when all probabilities are equal, as the second-order check below confirms.
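To see the same conclusion numerically, here is a minimal sketch, assuming NumPy and SciPy are available, that maximizes \( H(p) \) under \( \sum_i p_i = 1 \) with a generic constrained optimizer and compares the result with the analytic answer \( p_i = 1/n \). The function and variable names are illustrative, not taken from the derivation above:

```python
# Sketch: numerically maximize H(p) = -sum_i p_i ln p_i subject to sum_i p_i = 1
# and check that the optimizer lands on the uniform distribution p_i = 1/n.
import numpy as np
from scipy.optimize import minimize

n = 5

def neg_entropy(p):
    # Negative entropy (SciPy minimizes); clip to avoid log(0).
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

x0 = np.random.default_rng(0).dirichlet(np.ones(n))   # arbitrary feasible start
result = minimize(
    neg_entropy,
    x0,
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}],
)

print("numerical optimum:", result.x)                  # ~ 0.2 in every slot for n = 5
print("analytic optimum :", np.full(n, 1 / n))
print("max entropy      :", -result.fun, "vs ln(n) =", np.log(n))
```

The optimizer should recover the uniform distribution up to numerical tolerance, with maximal entropy \( \ln n \).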
Let:
\[ p_i = \frac{1}{n} + \delta_i, \quad \text{where} \quad \sum_{i=1}^{n} \delta_i = 0 \]Expanding entropy using a second-order Taylor series:
\[ H(p^* + \delta) = H(p^*) + \sum_{i} \frac{\partial H}{\partial p_i} \Big|_{p^*} (p_i - p_i^*) + \frac{1}{2} \sum_{i,j} \frac{\partial^2 H}{\partial p_i \partial p_j} \Big|_{p^*} (p_i - p_i^*) (p_j - p_j^*) + O(\delta^3) \]At \( p_i^* = 1/n \), the first derivative is:
\[ \frac{\partial H}{\partial p_i} \Big|_{p^*} = -\ln p_i^* - 1 = \ln n - 1 \]This gradient is the same constant for every \( i \), and since \( \sum_i (p_i - p_i^*) = \sum_i \delta_i = 0 \), the first-order term vanishes.
At \( p_i^* = 1/n \), the diagonal entries of the Hessian are:
\[ \frac{\partial^2 H}{\partial p_i^2} \Big|_{p^*} = -\frac{1}{p_i^*} = -n \]and the mixed derivatives \( \frac{\partial^2 H}{\partial p_i \partial p_j} \) vanish for \( i \neq j \), since each term of \( H \) depends on only one \( p_i \). So the second-order term is:
\[ \frac{1}{2} \sum_i (-n) (p_i - p_i^*)^2 = -\frac{n}{2} \sum_i (p_i - p_i^*)^2 \]Since \( \sum_i (p_i - p_i^*)^2 \geq 0 \), this term is never positive, and it is strictly negative for any nonzero perturbation, confirming concavity.
Thus, any deviation from the uniform distribution decreases \( H \), and entropy is maximized at \( p_i = 1/n \).
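As a final sanity check on the quadratic approximation (again a sketch, assuming NumPy; the entropy helper and variable names are illustrative), the snippet below compares the exact change \( H(1/n + \delta) - H(1/n) \) with \( -\frac{n}{2} \sum_i \delta_i^2 \) for small random perturbations satisfying \( \sum_i \delta_i = 0 \):

```python
# Sketch: compare the exact entropy change against the quadratic approximation
# derived above, for small random perturbations that sum to zero.
import numpy as np

def entropy(p):
    # Shannon entropy in nats: H(p) = -sum_i p_i ln p_i
    return -np.sum(p * np.log(p))

n = 5
rng = np.random.default_rng(0)
p_star = np.full(n, 1 / n)                     # uniform distribution p_i* = 1/n

for scale in [1e-2, 1e-3, 1e-4]:
    delta = rng.normal(scale=scale, size=n)
    delta -= delta.mean()                      # enforce sum_i delta_i = 0
    exact = entropy(p_star + delta) - entropy(p_star)
    approx = -(n / 2) * np.sum(delta**2)       # second-order Taylor term
    print(f"scale={scale:.0e}: exact={exact:.3e}, quadratic={approx:.3e}")
```

Both values should be negative and agree increasingly well as the perturbation shrinks, matching the concavity argument.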