- This fills query, key, value matrices for transformer attention with rotary position embedding (RoPE).
- m is the number of embeddings, h is the number of heads, and n is the per-head query/key/value length.
- x is a read-only row-major matrix of m embeddings (m rows); each embedding is h*n floats (h*n columns).
- wq, wk, wv are read-only row-major query, key, value weight matrices; each has h*n rows, h*n columns.
- q is the row-major query matrix (m rows, h*n columns); x times the transpose of wq is written to q.
- k is the row-major key matrix (m rows, h*n columns); x times the transpose of wk is written to k.
- v is the row-major value matrix (h*n rows, m columns); wv times the transpose of x is written to v.
- Each row of q and k has h subrows (one per head); each subrow is n consecutive floats; n is even.
- For every subrow of both q and k, each disjoint pair of consecutive floats is rotated separately.
- The rotation angle for a pair depends on the pair's row index and on the pair's offset within its subrow.
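The bullets above can be sketched in NumPy. Note this is a hedged illustration, not the reference solution: the spec does not give the exact angle formula, so the usual RoPE choice (base 10000, angle = row / base^(offset/n)) is assumed here, and the function name `fill_qkv` is made up for the example.

```python
import numpy as np

def fill_qkv(x, wq, wk, wv, n, theta=10000.0):
    """Sketch of the spec: x is (m, h*n); wq, wk, wv are (h*n, h*n).
    Returns q, k as (m, h*n) and v as (h*n, m) (v is stored transposed)."""
    m, hn = x.shape
    h = hn // n                # number of heads; n must be even
    q = x @ wq.T               # x times transpose of wq -> (m, h*n)
    k = x @ wk.T               # x times transpose of wk -> (m, h*n)
    v = wv @ x.T               # wv times transpose of x -> (h*n, m)
    # RoPE: in every subrow of q and k, rotate each disjoint pair
    # of consecutive floats; the angle depends on the row index and
    # the pair's offset in its subrow (formula below is an assumption).
    for mat in (q, k):
        for row in range(m):
            for head in range(h):
                for i in range(0, n, 2):
                    ang = row / theta ** (i / n)
                    c, s = np.cos(ang), np.sin(ang)
                    a = mat[row, head * n + i]
                    b = mat[row, head * n + i + 1]
                    mat[row, head * n + i] = a * c - b * s
                    mat[row, head * n + i + 1] = a * s + b * c
    return q, k, v
```

At row 0 the angle is 0 for every pair, so the first row of q and k is left unrotated; later rows are rotated by angles that shrink as the pair offset grows.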

To receive a hint, submit unfixed code.