<dd><p>Calculate and return a context vector using the dot-product attention mechanism.
The dimension of the context vector equals that of attended_sequence.</p>
<divclass="math">
\[ \begin{align}\begin{aligned}a(s_{i-1},h_{j}) & = s_{i-1}^\mathrm{T} h_{j}\\e_{i,j} & = a(s_{i-1}, h_{j})\\a_{i,j} & = \frac{exp(e_{i,j})}{\sum_{k=1}^{T_x}{exp(e_{i,k})}}\\c_{i} & = \sum_{j=1}^{T_{x}}a_{i,j}z_{j}\end{aligned}\end{align} \]</div>
<p>where <spanclass="math">\(h_{j}\)</span> is the jth element of encoded_sequence,
<spanclass="math">\(z_{j}\)</span> is the jth element of attended_sequence,
<spanclass="math">\(s_{i-1}\)</span> is transformed_state.</p>