Index: head/share/man/man9/fpu_kern.9
===================================================================
--- head/share/man/man9/fpu_kern.9	(revision 366671)
+++ head/share/man/man9/fpu_kern.9	(revision 366672)
@@ -1,214 +1,215 @@
.\" Copyright (c) 2014
.\" Konstantin Belousov <kib@FreeBSD.org>. All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
.\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
.\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
.\" IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
.\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
.\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
.\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
.\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
.\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
-.Dd March 7, 2018
+.Dd October 13, 2020
.Dt FPU_KERN 9
.Os
.Sh NAME
.Nm fpu_kern
.Nd "facility to use the FPU in the kernel"
.Sh SYNOPSIS
+.In machine/fpu.h
.Ft struct fpu_kern_ctx *
.Fn fpu_kern_alloc_ctx "u_int flags"
.Ft void
.Fn fpu_kern_free_ctx "struct fpu_kern_ctx *ctx"
.Ft void
.Fn fpu_kern_enter "struct thread *td" "struct fpu_kern_ctx *ctx" "u_int flags"
.Ft int
.Fn fpu_kern_leave "struct thread *td" "struct fpu_kern_ctx *ctx"
.Ft int
.Fn fpu_kern_thread "u_int flags"
.Ft int
.Fn is_fpu_kern_thread "u_int flags"
.Sh DESCRIPTION
The
.Nm
family of functions allows the use of FPU hardware in kernel code.
Modern FPUs are not limited to providing a hardware implementation of
floating point arithmetic; they offer advanced accelerators for
cryptography and other computationally intensive algorithms.
These facilities share registers with the FPU hardware.
.Pp
Typical kernel code does not need access to the FPU.
Saving a large register file on each entry to the kernel would waste time.
When kernel code uses the FPU, the current FPU state must be saved to
avoid corrupting the user-mode state, and vice versa.
.Pp
The management of the save and restore is automatic.
The processor catches accesses to the FPU registers when a non-current
context tries to access them.
Explicit calls are required for the allocation of the save area and for
the notification of the start and end of the code using the FPU.
.Pp
The
.Fn fpu_kern_alloc_ctx
function allocates the memory used by
.Nm
to track the use of the FPU hardware state and the related software state.
The
.Fn fpu_kern_alloc_ctx
function requires the
.Fa flags
argument, which currently accepts the following flags:
.Bl -tag -width ".Dv FPU_KERN_NOWAIT" -offset indent
.It Dv FPU_KERN_NOWAIT
Do not wait for available memory if the request cannot be satisfied
without sleeping.
.It 0
No special handling is required.
.El
.Pp
The function returns the allocated context area, or
.Va NULL
if the allocation failed.
.Pp
The
.Fn fpu_kern_free_ctx
function frees the context previously allocated by
.Fn fpu_kern_alloc_ctx .
.Pp
The
.Fn fpu_kern_enter
function designates the start of the region of kernel code where the
use of the FPU is allowed.
Its arguments are:
.Bl -tag -width ".Fa ctx" -offset indent
.It Fa td
Currently must be
.Va curthread .
.It Fa ctx
The context save area previously allocated by
.Fn fpu_kern_alloc_ctx
and not currently in use by another call to
.Fn fpu_kern_enter .
.It Fa flags
This argument currently accepts the following flags:
.Bl -tag -width ".Dv FPU_KERN_NORMAL" -offset indent
.It Dv FPU_KERN_NORMAL
Indicates that the caller intends to access the full FPU state.
Must currently be specified.
.It Dv FPU_KERN_KTHR
Indicates that no saving of the current FPU state should be performed
if the thread has called the
.Xr fpu_kern_thread 9
function.
This is intended to minimize code duplication in callers which can be
used from both kernel-thread and syscall contexts.
The
.Fn fpu_kern_leave
function correctly handles such contexts.
.It Dv FPU_KERN_NOCTX
Avoid nesting the save area.
If the flag is specified,
.Fa ctx
must be passed as
.Va NULL .
The flag should only be used for very short code blocks which can be
executed in a critical section.
It avoids the need to allocate the FPU context at the cost of increased
system latency.
.El
.El
.Pp
The function does not sleep or block.
It could cause an FPU trap during execution, on the first FPU access
after the function returns, and after each context switch.
On i386 and amd64 this will be the
.Nm Device Not Available
exception (see the Intel Software Developer's Manual for reference).
.Pp
The
.Fn fpu_kern_leave
function ends the region started by
.Fn fpu_kern_enter .
It is erroneous to use the FPU in the kernel before
.Fn fpu_kern_enter
or after
.Fn fpu_kern_leave .
The function takes the
.Fa td
thread argument, which currently must be
.Va curthread ,
and the
.Fa ctx
context pointer, previously passed to
.Fn fpu_kern_enter .
After the function returns, the context may be freed or reused by
another invocation of
.Fn fpu_kern_enter .
The function always returns 0.
.Pp
The
.Fn fpu_kern_thread
function enables an optimization for threads which never return to
usermode.
The current thread will reuse the usermode save area for the kernel
FPU state instead of requiring an explicitly allocated context.
There are no flags defined for the function, and no error states that
the function returns.
Once this function has been called, neither
.Fn fpu_kern_enter
nor
.Fn fpu_kern_leave
is required to be called, and the FPU is available for use in the
calling thread.
.Pp
The
.Fn is_fpu_kern_thread
function returns a boolean indicating whether the current thread
entered the mode enabled by
.Fn fpu_kern_thread .
There are currently no flags defined for the function; the return
value is true if the current thread has the permanent FPU save area,
and false otherwise.
.Sh NOTES
The
.Nm
facility is currently implemented only for the i386, amd64, and arm64
architectures.
.Pp
There is no way to handle floating point exceptions raised from kernel
mode.
.Pp
The unused
.Fa flags
arguments to the
.Nm
functions are to be extended to allow specification of the set of the
FPU hardware state used by the code region.
This would allow optimizations of saving and restoring the state.
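
The drivers changed later in this commit all wrap their FPU-using work in the same call pattern. A minimal sketch of that pattern, assuming a context allocated once at attach time (the names example_ctx and example_fpu_work are hypothetical; per-CPU context management, locking, and error handling are omitted):

	#include <sys/param.h>
	#include <sys/proc.h>
	#include <machine/fpu.h>

	/* Hypothetical: allocated once, e.g. at device attach time. */
	static struct fpu_kern_ctx *example_ctx;

	static void
	example_fpu_work(void)
	{
		bool kt;

		/*
		 * A thread that called fpu_kern_thread(0) keeps a
		 * permanent save area, so entering and leaving the FPU
		 * region can be skipped entirely.
		 */
		kt = is_fpu_kern_thread(0);
		if (!kt)
			fpu_kern_enter(curthread, example_ctx,
			    FPU_KERN_NORMAL | FPU_KERN_KTHR);

		/* ... FPU/SIMD use goes here ... */

		if (!kt)
			fpu_kern_leave(curthread, example_ctx);
	}

In this sketch, example_ctx would come from fpu_kern_alloc_ctx(FPU_KERN_NORMAL) and be released with fpu_kern_free_ctx(); the aesni and blake2 drivers below keep one such context per CPU, each guarded by a mutex.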
.Sh AUTHORS
The
.Nm
facility and this manual page were written by
.An Konstantin Belousov Aq Mt kib@FreeBSD.org .
The arm64 support was added by
.An Andrew Turner Aq Mt andrew@FreeBSD.org .
.Sh BUGS
.Fn fpu_kern_leave
should probably have type
.Ft void
(like
.Fn fpu_kern_enter ) .
Index: head/sys/crypto/aesni/aesni.c
===================================================================
--- head/sys/crypto/aesni/aesni.c	(revision 366671)
+++ head/sys/crypto/aesni/aesni.c	(revision 366672)
@@ -1,917 +1,913 @@
/*-
 * Copyright (c) 2005-2008 Pawel Jakub Dawidek
 * Copyright (c) 2010 Konstantin Belousov
 * Copyright (c) 2014 The FreeBSD Foundation
 * Copyright (c) 2017 Conrad Meyer
 * All rights reserved.
 *
 * Portions of this software were developed by John-Mark Gurney
 * under sponsorship of the FreeBSD Foundation and
 * Rubicon Communications, LLC (Netgate).
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
-#if defined(__i386__)
-#include <machine/npx.h>
-#elif defined(__amd64__)
#include <machine/fpu.h>
-#endif

static struct mtx_padalign *ctx_mtx;
static struct fpu_kern_ctx **ctx_fpu;

struct aesni_softc {
	int32_t cid;
	bool	has_aes;
	bool	has_sha;
};

#define	ACQUIRE_CTX(i, ctx)					\
	do {							\
		(i) = PCPU_GET(cpuid);				\
		mtx_lock(&ctx_mtx[(i)]);			\
		(ctx) = ctx_fpu[(i)];				\
	} while (0)
#define	RELEASE_CTX(i, ctx)					\
	do {							\
		mtx_unlock(&ctx_mtx[(i)]);			\
		(i) = -1;					\
		(ctx) = NULL;					\
	} while (0)

static int aesni_cipher_setup(struct aesni_session *ses,
    const struct crypto_session_params *csp);
static int aesni_cipher_process(struct aesni_session *ses,
    struct cryptop *crp);
static int aesni_cipher_crypt(struct aesni_session *ses,
    struct cryptop *crp, const struct crypto_session_params *csp);
static int aesni_cipher_mac(struct aesni_session *ses, struct cryptop *crp,
    const struct crypto_session_params *csp);

MALLOC_DEFINE(M_AESNI, "aesni_data", "AESNI Data");

static void
aesni_identify(driver_t *drv, device_t parent)
{

	/* NB: order 10 is so we get attached after h/w devices */
	if (device_find_child(parent, "aesni", -1) == NULL &&
	    BUS_ADD_CHILD(parent, 10, "aesni", -1) == 0)
		panic("aesni: could not attach");
}

static void
detect_cpu_features(bool *has_aes, bool *has_sha)
{
	*has_aes = ((cpu_feature2 & CPUID2_AESNI) != 0 &&
	    (cpu_feature2 & CPUID2_SSE41) != 0);
	*has_sha = ((cpu_stdext_feature & CPUID_STDEXT_SHA) != 0 &&
	    (cpu_feature2 & CPUID2_SSSE3) != 0);
}

static int
aesni_probe(device_t dev)
{
	bool has_aes, has_sha;

	detect_cpu_features(&has_aes, &has_sha);
	if (!has_aes && !has_sha) {
		device_printf(dev, "No AES or SHA support.\n");
		return (EINVAL);
	} else if (has_aes && has_sha)
		device_set_desc(dev,
		    "AES-CBC,AES-CCM,AES-GCM,AES-ICM,AES-XTS,SHA1,SHA256");
	else if (has_aes)
		device_set_desc(dev,
		    "AES-CBC,AES-CCM,AES-GCM,AES-ICM,AES-XTS");
	else
		device_set_desc(dev, "SHA1,SHA256");

	return (0);
}

static void
aesni_cleanctx(void)
{
	int i;

	/* XXX - no way to return driverid */
	CPU_FOREACH(i) {
		if (ctx_fpu[i] != NULL) {
			mtx_destroy(&ctx_mtx[i]);
			fpu_kern_free_ctx(ctx_fpu[i]);
		}
		ctx_fpu[i] = NULL;
	}
	free(ctx_mtx, M_AESNI);
	ctx_mtx = NULL;
	free(ctx_fpu, M_AESNI);
	ctx_fpu = NULL;
}

static int
aesni_attach(device_t dev)
{
	struct aesni_softc *sc;
	int i;

	sc = device_get_softc(dev);

	sc->cid = crypto_get_driverid(dev, sizeof(struct aesni_session),
	    CRYPTOCAP_F_SOFTWARE | CRYPTOCAP_F_SYNC |
	    CRYPTOCAP_F_ACCEL_SOFTWARE);
	if (sc->cid < 0) {
		device_printf(dev, "Could not get crypto driver id.\n");
		return (ENOMEM);
	}

	ctx_mtx = malloc(sizeof *ctx_mtx * (mp_maxid + 1), M_AESNI,
	    M_WAITOK|M_ZERO);
	ctx_fpu = malloc(sizeof *ctx_fpu * (mp_maxid + 1), M_AESNI,
	    M_WAITOK|M_ZERO);

	CPU_FOREACH(i) {
#ifdef __amd64__
		ctx_fpu[i] = fpu_kern_alloc_ctx_domain(
		    pcpu_find(i)->pc_domain, FPU_KERN_NORMAL);
#else
		ctx_fpu[i] = fpu_kern_alloc_ctx(FPU_KERN_NORMAL);
#endif
		mtx_init(&ctx_mtx[i], "anifpumtx", NULL, MTX_DEF|MTX_NEW);
	}

	detect_cpu_features(&sc->has_aes, &sc->has_sha);
	return (0);
}

static int
aesni_detach(device_t dev)
{
	struct aesni_softc *sc;

	sc = device_get_softc(dev);

	crypto_unregister_all(sc->cid);

	aesni_cleanctx();

	return (0);
}

static bool
aesni_auth_supported(struct aesni_softc *sc,
    const struct crypto_session_params *csp)
{

	if (!sc->has_sha)
		return (false);

	switch (csp->csp_auth_alg) {
	case CRYPTO_SHA1:
	case CRYPTO_SHA2_224:
	case CRYPTO_SHA2_256:
	case CRYPTO_SHA1_HMAC:
	case CRYPTO_SHA2_224_HMAC:
	case CRYPTO_SHA2_256_HMAC:
		break;
	default:
		return (false);
	}

	return (true);
}

static bool
aesni_cipher_supported(struct aesni_softc *sc,
    const struct crypto_session_params *csp)
{

	if (!sc->has_aes)
		return (false);

	switch (csp->csp_cipher_alg) {
	case CRYPTO_AES_CBC:
	case CRYPTO_AES_ICM:
		if (csp->csp_ivlen != AES_BLOCK_LEN)
			return (false);
		return (sc->has_aes);
	case CRYPTO_AES_XTS:
		if (csp->csp_ivlen != AES_XTS_IV_LEN)
			return (false);
		return (sc->has_aes);
	default:
		return (false);
	}
}

static int
aesni_probesession(device_t dev, const struct crypto_session_params *csp)
{
	struct aesni_softc *sc;

	sc = device_get_softc(dev);
	if ((csp->csp_flags & ~(CSP_F_SEPARATE_OUTPUT | CSP_F_SEPARATE_AAD))
	    != 0)
		return (EINVAL);
	switch (csp->csp_mode) {
	case CSP_MODE_DIGEST:
		if (!aesni_auth_supported(sc, csp))
			return (EINVAL);
		break;
	case CSP_MODE_CIPHER:
		if (!aesni_cipher_supported(sc, csp))
			return (EINVAL);
		break;
	case CSP_MODE_AEAD:
		switch (csp->csp_cipher_alg) {
		case CRYPTO_AES_NIST_GCM_16:
			if (csp->csp_auth_mlen != 0 &&
			    csp->csp_auth_mlen != GMAC_DIGEST_LEN)
				return (EINVAL);
			if (csp->csp_ivlen != AES_GCM_IV_LEN ||
			    !sc->has_aes)
				return (EINVAL);
			break;
		case CRYPTO_AES_CCM_16:
			if (csp->csp_auth_mlen != 0 &&
			    csp->csp_auth_mlen != AES_CBC_MAC_HASH_LEN)
				return (EINVAL);
			if (csp->csp_ivlen != AES_CCM_IV_LEN ||
			    !sc->has_aes)
				return (EINVAL);
			break;
		default:
			return (EINVAL);
		}
		break;
	case CSP_MODE_ETA:
		if (!aesni_auth_supported(sc, csp) ||
		    !aesni_cipher_supported(sc, csp))
			return (EINVAL);
		break;
	default:
		return (EINVAL);
	}

	return (CRYPTODEV_PROBE_ACCEL_SOFTWARE);
}

static int
aesni_newsession(device_t dev, crypto_session_t cses,
    const struct crypto_session_params *csp)
{
	struct aesni_softc *sc;
	struct aesni_session *ses;
	int error;

	sc = device_get_softc(dev);

	ses = crypto_get_driver_session(cses);

	switch (csp->csp_mode) {
	case CSP_MODE_DIGEST:
	case CSP_MODE_CIPHER:
	case CSP_MODE_AEAD:
	case CSP_MODE_ETA:
		break;
	default:
		return (EINVAL);
	}
	error = aesni_cipher_setup(ses, csp);
	if (error != 0) {
		CRYPTDEB("setup failed");
		return (error);
	}

	return (0);
}

static int
aesni_process(device_t dev, struct cryptop *crp, int hint __unused)
{
	struct aesni_session *ses;
	int error;

	ses = crypto_get_driver_session(crp->crp_session);

	error = aesni_cipher_process(ses, crp);

	crp->crp_etype = error;
	crypto_done(crp);
	return (0);
}

static uint8_t *
aesni_cipher_alloc(struct cryptop *crp, int start, int length,
    bool *allocated)
{
	uint8_t *addr;

	addr = crypto_contiguous_subsegment(crp, start, length);
	if (addr != NULL) {
		*allocated = false;
		return (addr);
	}
	addr = malloc(length, M_AESNI, M_NOWAIT);
	if (addr != NULL) {
		*allocated = true;
		crypto_copydata(crp, start, length, addr);
	} else
		*allocated = false;
	return (addr);
}

static device_method_t aesni_methods[] = {
	DEVMETHOD(device_identify, aesni_identify),
	DEVMETHOD(device_probe, aesni_probe),
	DEVMETHOD(device_attach, aesni_attach),
	DEVMETHOD(device_detach, aesni_detach),

	DEVMETHOD(cryptodev_probesession, aesni_probesession),
	DEVMETHOD(cryptodev_newsession, aesni_newsession),
	DEVMETHOD(cryptodev_process, aesni_process),

	DEVMETHOD_END
};

static driver_t aesni_driver = {
	"aesni",
	aesni_methods,
	sizeof(struct aesni_softc),
};

static devclass_t aesni_devclass;

DRIVER_MODULE(aesni, nexus, aesni_driver, aesni_devclass, 0, 0);
MODULE_VERSION(aesni, 1);
MODULE_DEPEND(aesni, crypto, 1, 1, 1);

static int
intel_sha1_update(void *vctx, const void *vdata, u_int datalen)
{
	struct sha1_ctxt *ctx = vctx;
	const char *data = vdata;
	size_t gaplen;
	size_t gapstart;
	size_t off;
	size_t copysiz;
	u_int blocks;

	off = 0;
	/* Do any aligned blocks without redundant copying. */
	if (datalen >= 64 && ctx->count % 64 == 0) {
		blocks = datalen / 64;
		ctx->c.b64[0] += blocks * 64 * 8;
		intel_sha1_step(ctx->h.b32, data + off, blocks);
		off += blocks * 64;
	}

	while (off < datalen) {
		gapstart = ctx->count % 64;
		gaplen = 64 - gapstart;

		copysiz = (gaplen < datalen - off) ? gaplen : datalen - off;
		bcopy(&data[off], &ctx->m.b8[gapstart], copysiz);
		ctx->count += copysiz;
		ctx->count %= 64;
		ctx->c.b64[0] += copysiz * 8;
		if (ctx->count % 64 == 0)
			intel_sha1_step(ctx->h.b32, (void *)ctx->m.b8, 1);
		off += copysiz;
	}
	return (0);
}

static void
SHA1_Init_fn(void *ctx)
{
	sha1_init(ctx);
}

static void
SHA1_Finalize_fn(void *digest, void *ctx)
{
	sha1_result(ctx, digest);
}

static int
intel_sha256_update(void *vctx, const void *vdata, u_int len)
{
	SHA256_CTX *ctx = vctx;
	uint64_t bitlen;
	uint32_t r;
	u_int blocks;
	const unsigned char *src = vdata;

	/* Number of bytes left in the buffer from previous updates */
	r = (ctx->count >> 3) & 0x3f;

	/* Convert the length into a number of bits */
	bitlen = len << 3;

	/* Update number of bits */
	ctx->count += bitlen;

	/* Handle the case where we don't need to perform any transforms */
	if (len < 64 - r) {
		memcpy(&ctx->buf[r], src, len);
		return (0);
	}

	/* Finish the current block */
	memcpy(&ctx->buf[r], src, 64 - r);
	intel_sha256_step(ctx->state, ctx->buf, 1);
	src += 64 - r;
	len -= 64 - r;

	/* Perform complete blocks */
	if (len >= 64) {
		blocks = len / 64;
		intel_sha256_step(ctx->state, src, blocks);
		src += blocks * 64;
		len -= blocks * 64;
	}

	/* Copy left over data into buffer */
	memcpy(ctx->buf, src, len);
	return (0);
}

static void
SHA224_Init_fn(void *ctx)
{
	SHA224_Init(ctx);
}

static void
SHA224_Finalize_fn(void *digest, void *ctx)
{
	SHA224_Final(digest, ctx);
}

static void
SHA256_Init_fn(void *ctx)
{
	SHA256_Init(ctx);
}

static void
SHA256_Finalize_fn(void *digest, void *ctx)
{
	SHA256_Final(digest, ctx);
}

static int
aesni_authprepare(struct aesni_session *ses, int klen)
{

	if (klen > SHA1_BLOCK_LEN)
		return (EINVAL);
	if ((ses->hmac && klen == 0) || (!ses->hmac && klen != 0))
		return (EINVAL);
	return (0);
}

static int
aesni_cipherprepare(const struct crypto_session_params *csp)
{

	switch (csp->csp_cipher_alg) {
	case CRYPTO_AES_ICM:
	case CRYPTO_AES_NIST_GCM_16:
	case CRYPTO_AES_CCM_16:
	case CRYPTO_AES_CBC:
		switch (csp->csp_cipher_klen * 8) {
		case 128:
		case 192:
		case 256:
			break;
		default:
			CRYPTDEB("invalid CBC/ICM/GCM key length");
			return (EINVAL);
		}
		break;
	case CRYPTO_AES_XTS:
		switch (csp->csp_cipher_klen * 8) {
		case 256:
		case 512:
			break;
		default:
			CRYPTDEB("invalid XTS key length");
			return (EINVAL);
		}
		break;
	default:
		return (EINVAL);
	}
	return (0);
}

static int
aesni_cipher_setup(struct aesni_session *ses,
    const struct crypto_session_params *csp)
{
	struct fpu_kern_ctx *ctx;
	int kt, ctxidx, error;

	switch (csp->csp_auth_alg) {
	case CRYPTO_SHA1_HMAC:
		ses->hmac = true;
		/* FALLTHROUGH */
	case CRYPTO_SHA1:
		ses->hash_len = SHA1_HASH_LEN;
		ses->hash_init = SHA1_Init_fn;
		ses->hash_update = intel_sha1_update;
		ses->hash_finalize = SHA1_Finalize_fn;
		break;
	case CRYPTO_SHA2_224_HMAC:
		ses->hmac = true;
		/* FALLTHROUGH */
	case CRYPTO_SHA2_224:
		ses->hash_len = SHA2_224_HASH_LEN;
		ses->hash_init = SHA224_Init_fn;
		ses->hash_update = intel_sha256_update;
		ses->hash_finalize = SHA224_Finalize_fn;
		break;
	case CRYPTO_SHA2_256_HMAC:
		ses->hmac = true;
		/* FALLTHROUGH */
	case CRYPTO_SHA2_256:
		ses->hash_len = SHA2_256_HASH_LEN;
		ses->hash_init = SHA256_Init_fn;
		ses->hash_update = intel_sha256_update;
		ses->hash_finalize = SHA256_Finalize_fn;
		break;
	}

	if (ses->hash_len != 0) {
		if (csp->csp_auth_mlen == 0)
			ses->mlen = ses->hash_len;
		else
			ses->mlen = csp->csp_auth_mlen;

		error = aesni_authprepare(ses, csp->csp_auth_klen);
		if (error != 0)
			return (error);
	}

	error = aesni_cipherprepare(csp);
	if (error != 0)
		return (error);

	kt = is_fpu_kern_thread(0) || (csp->csp_cipher_alg == 0);
	if (!kt) {
		ACQUIRE_CTX(ctxidx, ctx);
		fpu_kern_enter(curthread, ctx,
		    FPU_KERN_NORMAL | FPU_KERN_KTHR);
	}

	error = 0;
	if (csp->csp_cipher_key != NULL)
		aesni_cipher_setup_common(ses, csp, csp->csp_cipher_key,
		    csp->csp_cipher_klen);

	if (!kt) {
		fpu_kern_leave(curthread, ctx);
		RELEASE_CTX(ctxidx, ctx);
	}
	return (error);
}

static int
aesni_cipher_process(struct aesni_session *ses, struct cryptop *crp)
{
	const struct crypto_session_params *csp;
	struct fpu_kern_ctx *ctx;
	int error, ctxidx;
	bool kt;

	csp = crypto_get_params(crp->crp_session);
	switch (csp->csp_cipher_alg) {
	case CRYPTO_AES_ICM:
	case CRYPTO_AES_NIST_GCM_16:
	case CRYPTO_AES_CCM_16:
		if ((crp->crp_flags & CRYPTO_F_IV_SEPARATE) == 0)
			return (EINVAL);
		break;
	case CRYPTO_AES_CBC:
	case CRYPTO_AES_XTS:
		/* CBC & XTS can only handle full blocks for now */
		if ((crp->crp_payload_length % AES_BLOCK_LEN) != 0)
			return (EINVAL);
		break;
	}

	ctx = NULL;
	ctxidx = 0;
	error = 0;

	kt = is_fpu_kern_thread(0);
	if (!kt) {
		ACQUIRE_CTX(ctxidx, ctx);
		fpu_kern_enter(curthread, ctx,
		    FPU_KERN_NORMAL | FPU_KERN_KTHR);
	}

	/* Do work */
	if (csp->csp_mode == CSP_MODE_ETA) {
		if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
			error = aesni_cipher_crypt(ses, crp, csp);
			if (error == 0)
				error = aesni_cipher_mac(ses, crp, csp);
		} else {
			error = aesni_cipher_mac(ses, crp, csp);
			if (error == 0)
				error = aesni_cipher_crypt(ses, crp, csp);
		}
	} else if (csp->csp_mode == CSP_MODE_DIGEST)
		error = aesni_cipher_mac(ses, crp, csp);
	else
		error = aesni_cipher_crypt(ses, crp, csp);

	if (!kt) {
		fpu_kern_leave(curthread, ctx);
		RELEASE_CTX(ctxidx, ctx);
	}
	return (error);
}

static int
aesni_cipher_crypt(struct aesni_session *ses, struct cryptop *crp,
    const struct crypto_session_params *csp)
{
	uint8_t iv[AES_BLOCK_LEN], tag[GMAC_DIGEST_LEN];
	uint8_t *authbuf, *buf, *outbuf;
	int error;
	bool encflag, allocated, authallocated, outallocated, outcopy;

	buf = aesni_cipher_alloc(crp, crp->crp_payload_start,
	    crp->crp_payload_length, &allocated);
	if (buf == NULL)
		return (ENOMEM);

	outallocated = false;
	authallocated = false;
	authbuf = NULL;
	if (csp->csp_cipher_alg == CRYPTO_AES_NIST_GCM_16 ||
	    csp->csp_cipher_alg == CRYPTO_AES_CCM_16) {
		if (crp->crp_aad != NULL)
			authbuf = crp->crp_aad;
		else
			authbuf = aesni_cipher_alloc(crp, crp->crp_aad_start,
			    crp->crp_aad_length, &authallocated);
		if (authbuf == NULL) {
			error = ENOMEM;
			goto out;
		}
	}

	if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) {
		outbuf = crypto_buffer_contiguous_subsegment(&crp->crp_obuf,
		    crp->crp_payload_output_start, crp->crp_payload_length);
		if (outbuf == NULL) {
			outcopy = true;
			if (allocated)
				outbuf = buf;
			else {
				outbuf = malloc(crp->crp_payload_length,
				    M_AESNI, M_NOWAIT);
				if (outbuf == NULL) {
					error = ENOMEM;
					goto out;
				}
				outallocated = true;
			}
		} else
			outcopy = false;
	} else {
		outbuf = buf;
		outcopy = allocated;
	}

	error = 0;
	encflag = CRYPTO_OP_IS_ENCRYPT(crp->crp_op);
	if (crp->crp_cipher_key != NULL)
		aesni_cipher_setup_common(ses, csp, crp->crp_cipher_key,
		    csp->csp_cipher_klen);

	crypto_read_iv(crp, iv);

	switch (csp->csp_cipher_alg) {
	case CRYPTO_AES_CBC:
		if (encflag)
			aesni_encrypt_cbc(ses->rounds, ses->enc_schedule,
			    crp->crp_payload_length, buf, outbuf, iv);
		else {
			if (buf != outbuf)
				memcpy(outbuf, buf, crp->crp_payload_length);
			aesni_decrypt_cbc(ses->rounds, ses->dec_schedule,
			    crp->crp_payload_length, outbuf, iv);
		}
		break;
	case CRYPTO_AES_ICM:
		/* encryption & decryption are the same */
		aesni_encrypt_icm(ses->rounds, ses->enc_schedule,
		    crp->crp_payload_length, buf, outbuf, iv);
		break;
	case CRYPTO_AES_XTS:
		if (encflag)
			aesni_encrypt_xts(ses->rounds, ses->enc_schedule,
			    ses->xts_schedule, crp->crp_payload_length, buf,
			    outbuf, iv);
		else
			aesni_decrypt_xts(ses->rounds, ses->dec_schedule,
			    ses->xts_schedule, crp->crp_payload_length, buf,
			    outbuf, iv);
		break;
	case CRYPTO_AES_NIST_GCM_16:
		if (encflag) {
			memset(tag, 0, sizeof(tag));
			AES_GCM_encrypt(buf, outbuf, authbuf, iv, tag,
			    crp->crp_payload_length, crp->crp_aad_length,
			    csp->csp_ivlen, ses->enc_schedule, ses->rounds);
			crypto_copyback(crp, crp->crp_digest_start,
			    sizeof(tag), tag);
		} else {
			crypto_copydata(crp, crp->crp_digest_start,
			    sizeof(tag), tag);
			if (!AES_GCM_decrypt(buf, outbuf, authbuf, iv, tag,
			    crp->crp_payload_length, crp->crp_aad_length,
			    csp->csp_ivlen, ses->enc_schedule, ses->rounds))
				error = EBADMSG;
		}
		break;
	case CRYPTO_AES_CCM_16:
		if (encflag) {
			memset(tag, 0, sizeof(tag));
			AES_CCM_encrypt(buf, outbuf, authbuf, iv, tag,
			    crp->crp_payload_length, crp->crp_aad_length,
			    csp->csp_ivlen, ses->enc_schedule, ses->rounds);
			crypto_copyback(crp, crp->crp_digest_start,
			    sizeof(tag), tag);
		} else {
			crypto_copydata(crp, crp->crp_digest_start,
			    sizeof(tag), tag);
			if (!AES_CCM_decrypt(buf, outbuf, authbuf, iv, tag,
			    crp->crp_payload_length, crp->crp_aad_length,
			    csp->csp_ivlen, ses->enc_schedule, ses->rounds))
				error = EBADMSG;
		}
		break;
	}
	if (outcopy && error == 0)
		crypto_copyback(crp, CRYPTO_HAS_OUTPUT_BUFFER(crp) ?
		    crp->crp_payload_output_start : crp->crp_payload_start,
		    crp->crp_payload_length, outbuf);

out:
	if (allocated)
		zfree(buf, M_AESNI);
	if (authallocated)
		zfree(authbuf, M_AESNI);
	if (outallocated)
		zfree(outbuf, M_AESNI);
	explicit_bzero(iv, sizeof(iv));
	explicit_bzero(tag, sizeof(tag));
	return (error);
}

static int
aesni_cipher_mac(struct aesni_session *ses, struct cryptop *crp,
    const struct crypto_session_params *csp)
{
	union {
		struct SHA256Context sha2 __aligned(16);
		struct sha1_ctxt sha1 __aligned(16);
	} sctx;
	uint32_t res[SHA2_256_HASH_LEN / sizeof(uint32_t)];
	const uint8_t *key;
	int i, keylen;

	if (crp->crp_auth_key != NULL)
		key = crp->crp_auth_key;
	else
		key = csp->csp_auth_key;
	keylen = csp->csp_auth_klen;

	if (ses->hmac) {
		uint8_t hmac_key[SHA1_BLOCK_LEN] __aligned(16);

		/* Inner hash: (K ^ IPAD) || data */
		ses->hash_init(&sctx);
		for (i = 0; i < keylen; i++)
			hmac_key[i] = key[i] ^ HMAC_IPAD_VAL;
		for (i = keylen; i < sizeof(hmac_key); i++)
			hmac_key[i] = 0 ^ HMAC_IPAD_VAL;
		ses->hash_update(&sctx, hmac_key, sizeof(hmac_key));

		if (crp->crp_aad != NULL)
			ses->hash_update(&sctx, crp->crp_aad,
			    crp->crp_aad_length);
		else
			crypto_apply(crp, crp->crp_aad_start,
			    crp->crp_aad_length, ses->hash_update, &sctx);
		if (CRYPTO_HAS_OUTPUT_BUFFER(crp) &&
		    CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
			crypto_apply_buf(&crp->crp_obuf,
			    crp->crp_payload_output_start,
			    crp->crp_payload_length,
			    ses->hash_update, &sctx);
		else
			crypto_apply(crp, crp->crp_payload_start,
			    crp->crp_payload_length,
			    ses->hash_update, &sctx);
		ses->hash_finalize(res, &sctx);

		/* Outer hash: (K ^ OPAD) || inner hash */
		ses->hash_init(&sctx);
		for (i = 0; i < keylen; i++)
			hmac_key[i] = key[i] ^ HMAC_OPAD_VAL;
		for (i = keylen; i < sizeof(hmac_key); i++)
			hmac_key[i] = 0 ^ HMAC_OPAD_VAL;
		ses->hash_update(&sctx, hmac_key, sizeof(hmac_key));
		ses->hash_update(&sctx, res, ses->hash_len);
		ses->hash_finalize(res, &sctx);
		explicit_bzero(hmac_key, sizeof(hmac_key));
	} else {
		ses->hash_init(&sctx);
		if (crp->crp_aad != NULL)
			ses->hash_update(&sctx, crp->crp_aad,
			    crp->crp_aad_length);
		else
			crypto_apply(crp, crp->crp_aad_start,
			    crp->crp_aad_length, ses->hash_update, &sctx);
		if (CRYPTO_HAS_OUTPUT_BUFFER(crp) &&
		    CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
			crypto_apply_buf(&crp->crp_obuf,
			    crp->crp_payload_output_start,
			    crp->crp_payload_length,
			    ses->hash_update, &sctx);
		else
			crypto_apply(crp, crp->crp_payload_start,
			    crp->crp_payload_length,
			    ses->hash_update, &sctx);
		ses->hash_finalize(res, &sctx);
	}

	if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
		uint32_t res2[SHA2_256_HASH_LEN / sizeof(uint32_t)];

		crypto_copydata(crp, crp->crp_digest_start, ses->mlen, res2);
		if (timingsafe_bcmp(res, res2, ses->mlen) != 0)
			return (EBADMSG);
		explicit_bzero(res2, sizeof(res2));
	} else
		crypto_copyback(crp, crp->crp_digest_start, ses->mlen, res);
	explicit_bzero(res, sizeof(res));
	return (0);
}
Index: head/sys/crypto/aesni/aesni.h
===================================================================
--- head/sys/crypto/aesni/aesni.h	(revision 366671)
+++ head/sys/crypto/aesni/aesni.h	(revision 366672)
@@ -1,126 +1,122 @@
/*-
 * Copyright (c) 2010 Konstantin Belousov
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * $FreeBSD$
 */

#ifndef _AESNI_H_
#define	_AESNI_H_

#include
#include
#include
#include

#if defined(__amd64__) || defined(__i386__)
#include
#include
#include
#include
-#endif
-#if defined(__i386__)
-#include <machine/npx.h>
-#elif defined(__amd64__)
#include <machine/fpu.h>
#endif

#define	AES128_ROUNDS	10
#define	AES192_ROUNDS	12
#define	AES256_ROUNDS	14
#define	AES_SCHED_LEN	((AES256_ROUNDS + 1) * AES_BLOCK_LEN)

struct aesni_session {
	uint8_t enc_schedule[AES_SCHED_LEN] __aligned(16);
	uint8_t dec_schedule[AES_SCHED_LEN] __aligned(16);
	uint8_t xts_schedule[AES_SCHED_LEN] __aligned(16);
	int rounds;
	/* uint8_t *ses_ictx; */
	/* uint8_t *ses_octx; */
	int used;
	int mlen;
	int hash_len;
	void (*hash_init)(void *);
	int (*hash_update)(void *, const void *, u_int);
	void (*hash_finalize)(void *, void *);
	bool hmac;
};

/*
 * Internal functions, implemented in assembler.
 */
void aesni_set_enckey(const uint8_t *userkey,
    uint8_t *encrypt_schedule /*__aligned(16)*/, int number_of_rounds);
void aesni_set_deckey(const uint8_t *encrypt_schedule /*__aligned(16)*/,
    uint8_t *decrypt_schedule /*__aligned(16)*/, int number_of_rounds);

/*
 * Slightly more public interfaces.
 */
void aesni_encrypt_cbc(int rounds, const void *key_schedule /*__aligned(16)*/,
    size_t len, const uint8_t *from, uint8_t *to,
    const uint8_t iv[__min_size(AES_BLOCK_LEN)]);
void aesni_decrypt_cbc(int rounds, const void *key_schedule /*__aligned(16)*/,
    size_t len, uint8_t *buf, const uint8_t iv[__min_size(AES_BLOCK_LEN)]);
void aesni_encrypt_ecb(int rounds, const void *key_schedule /*__aligned(16)*/,
    size_t len, const uint8_t *from, uint8_t *to);
void aesni_decrypt_ecb(int rounds, const void *key_schedule /*__aligned(16)*/,
    size_t len, const uint8_t *from, uint8_t *to);
void aesni_encrypt_icm(int rounds, const void *key_schedule /*__aligned(16)*/,
    size_t len, const uint8_t *from, uint8_t *to,
    const uint8_t iv[__min_size(AES_BLOCK_LEN)]);
void aesni_encrypt_xts(int rounds, const void *data_schedule /*__aligned(16)*/,
    const void *tweak_schedule /*__aligned(16)*/, size_t len,
    const uint8_t *from, uint8_t *to,
    const uint8_t iv[__min_size(AES_BLOCK_LEN)]);
void aesni_decrypt_xts(int rounds, const void *data_schedule /*__aligned(16)*/,
    const void *tweak_schedule /*__aligned(16)*/, size_t len,
    const uint8_t *from, uint8_t *to,
    const uint8_t iv[__min_size(AES_BLOCK_LEN)]);

/* GCM & GHASH functions */
void AES_GCM_encrypt(const unsigned char *in, unsigned char *out,
    const unsigned char *addt, const unsigned char *ivec,
    unsigned char *tag, uint32_t nbytes, uint32_t abytes, int ibytes,
    const unsigned char *key, int nr);
int AES_GCM_decrypt(const unsigned char *in, unsigned char *out,
    const unsigned char *addt, const unsigned char *ivec,
    const unsigned char *tag, uint32_t nbytes, uint32_t abytes, int ibytes,
    const unsigned char *key, int nr);

/* CCM + CBC-MAC functions */
void AES_CCM_encrypt(const unsigned char *in, unsigned char *out,
    const unsigned char *addt, const unsigned char *ivec,
    unsigned char *tag, uint32_t nbytes, uint32_t abytes, int ibytes,
    const unsigned char *key, int nr);
int AES_CCM_decrypt(const unsigned char *in, unsigned char *out,
    const unsigned char *addt, const unsigned char *ivec,
    const unsigned char *tag, uint32_t nbytes, uint32_t abytes, int ibytes,
    const unsigned char *key, int nr);

void aesni_cipher_setup_common(struct aesni_session *ses,
    const struct crypto_session_params *csp, const uint8_t *key, int keylen);

#endif /* _AESNI_H_ */
Index: head/sys/crypto/blake2/blake2_cryptodev.c
===================================================================
--- head/sys/crypto/blake2/blake2_cryptodev.c	(revision 366671)
+++ head/sys/crypto/blake2/blake2_cryptodev.c	(revision 366672)
@@ -1,418 +1,414 @@
/*-
 * Copyright (c) 2018 Conrad Meyer
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
-#if defined(__amd64__)
#include <machine/fpu.h>
-#elif defined(__i386__)
-#include <machine/npx.h>
-#endif

struct blake2_session {
	size_t mlen;
};
CTASSERT((size_t)BLAKE2B_KEYBYTES > (size_t)BLAKE2S_KEYBYTES);

struct blake2_softc {
	bool	dying;
	int32_t cid;
	struct rwlock lock;
};

static struct mtx_padalign *ctx_mtx;
static struct fpu_kern_ctx **ctx_fpu;

#define	ACQUIRE_CTX(i, ctx)					\
	do {							\
		(i) = PCPU_GET(cpuid);				\
		mtx_lock(&ctx_mtx[(i)]);			\
		(ctx) = ctx_fpu[(i)];				\
	} while (0)
#define	RELEASE_CTX(i, ctx)					\
	do {							\
		mtx_unlock(&ctx_mtx[(i)]);			\
		(i) = -1;					\
		(ctx) = NULL;					\
	} while (0)

static int blake2_cipher_setup(struct blake2_session *ses,
    const struct crypto_session_params *csp);
static int blake2_cipher_process(struct blake2_session *ses,
    struct cryptop *crp);

MALLOC_DEFINE(M_BLAKE2, "blake2_data", "Blake2 Data");

static void
blake2_identify(driver_t *drv, device_t parent)
{

	/* NB: order 10 is so we get attached after h/w devices */
	if (device_find_child(parent, "blaketwo", -1) == NULL &&
	    BUS_ADD_CHILD(parent, 10, "blaketwo", -1) == 0)
		panic("blaketwo: could not attach");
}

static int
blake2_probe(device_t dev)
{
	device_set_desc(dev, "Blake2");
	return (0);
}

static void
blake2_cleanctx(void)
{
	int i;

	/* XXX - no way to return driverid */
	CPU_FOREACH(i) {
		if (ctx_fpu[i] != NULL) {
			mtx_destroy(&ctx_mtx[i]);
			fpu_kern_free_ctx(ctx_fpu[i]);
		}
		ctx_fpu[i] = NULL;
	}
	free(ctx_mtx, M_BLAKE2);
	ctx_mtx = NULL;
	free(ctx_fpu, M_BLAKE2);
	ctx_fpu = NULL;
}

static int
blake2_attach(device_t dev)
{
	struct blake2_softc *sc;
	int i;

	sc = device_get_softc(dev);
	sc->dying = false;

	sc->cid = crypto_get_driverid(dev, sizeof(struct blake2_session),
	    CRYPTOCAP_F_SOFTWARE | CRYPTOCAP_F_SYNC |
	    CRYPTOCAP_F_ACCEL_SOFTWARE);
	if (sc->cid < 0) {
		device_printf(dev, "Could not get crypto driver id.\n");
		return (ENOMEM);
	}

	ctx_mtx = malloc(sizeof(*ctx_mtx) * (mp_maxid + 1), M_BLAKE2,
	    M_WAITOK | M_ZERO);
	ctx_fpu = malloc(sizeof(*ctx_fpu) * (mp_maxid + 1), M_BLAKE2,
	    M_WAITOK | M_ZERO);

	CPU_FOREACH(i) {
#ifdef __amd64__
		ctx_fpu[i] = fpu_kern_alloc_ctx_domain(
		    pcpu_find(i)->pc_domain, FPU_KERN_NORMAL);
#else
		ctx_fpu[i] = fpu_kern_alloc_ctx(FPU_KERN_NORMAL);
#endif
		mtx_init(&ctx_mtx[i], "bl2fpumtx", NULL, MTX_DEF | MTX_NEW);
	}

	rw_init(&sc->lock, "blake2_lock");

	return (0);
}

static int
blake2_detach(device_t dev)
{
	struct blake2_softc *sc;

	sc = device_get_softc(dev);

	rw_wlock(&sc->lock);
	sc->dying = true;
	rw_wunlock(&sc->lock);
	crypto_unregister_all(sc->cid);

	rw_destroy(&sc->lock);

	blake2_cleanctx();

	return (0);
}

static int
blake2_probesession(device_t dev, const struct crypto_session_params *csp)
{

	if (csp->csp_flags != 0)
		return (EINVAL);
	switch (csp->csp_mode) {
	case CSP_MODE_DIGEST:
		switch (csp->csp_auth_alg) {
		case CRYPTO_BLAKE2B:
		case CRYPTO_BLAKE2S:
			break;
		default:
			return (EINVAL);
		}
		break;
	default:
		return (EINVAL);
	}
	return (CRYPTODEV_PROBE_ACCEL_SOFTWARE);
}

static int
blake2_newsession(device_t dev, crypto_session_t cses,
    const struct crypto_session_params *csp)
{
	struct blake2_softc *sc;
	struct blake2_session *ses;
	int error;

	sc = device_get_softc(dev);
	ses = crypto_get_driver_session(cses);

	rw_rlock(&sc->lock);
	if (sc->dying) {
		rw_runlock(&sc->lock);
		return (EINVAL);
	}
	rw_runlock(&sc->lock);

	error = blake2_cipher_setup(ses, csp);
	if (error != 0) {
		CRYPTDEB("setup failed");
		return (error);
	}

	return (0);
}

static int
blake2_process(device_t dev, struct cryptop *crp, int hint __unused)
{
	struct blake2_session *ses;
	int error;

	ses = crypto_get_driver_session(crp->crp_session);
	error = blake2_cipher_process(ses, crp);

	crp->crp_etype = error;
	crypto_done(crp);
	return (0);
}

static device_method_t blake2_methods[] = {
	DEVMETHOD(device_identify, blake2_identify),
	DEVMETHOD(device_probe, blake2_probe),
	DEVMETHOD(device_attach, blake2_attach),
	DEVMETHOD(device_detach, blake2_detach),

	DEVMETHOD(cryptodev_probesession, blake2_probesession),
	DEVMETHOD(cryptodev_newsession, blake2_newsession),
	DEVMETHOD(cryptodev_process, blake2_process),

	DEVMETHOD_END
};

static driver_t blake2_driver = {
	"blaketwo",
	blake2_methods,
	sizeof(struct blake2_softc),
};

static devclass_t blake2_devclass;

DRIVER_MODULE(blake2, nexus, blake2_driver, blake2_devclass, 0, 0);
MODULE_VERSION(blake2, 1);
MODULE_DEPEND(blake2, crypto, 1, 1, 1);

static bool
blake2_check_klen(const struct crypto_session_params *csp, unsigned klen)
{

	if (csp->csp_auth_alg == CRYPTO_BLAKE2S)
		return (klen <= BLAKE2S_KEYBYTES);
	else
		return (klen <= BLAKE2B_KEYBYTES);
}

static int
blake2_cipher_setup(struct blake2_session *ses,
    const struct crypto_session_params *csp)
{
	int hashlen;

	CTASSERT((size_t)BLAKE2S_OUTBYTES <= (size_t)BLAKE2B_OUTBYTES);

	if (!blake2_check_klen(csp, csp->csp_auth_klen))
		return (EINVAL);

	if (csp->csp_auth_mlen < 0)
		return (EINVAL);

	switch (csp->csp_auth_alg) {
	case CRYPTO_BLAKE2S:
		hashlen = BLAKE2S_OUTBYTES;
		break;
	case CRYPTO_BLAKE2B:
		hashlen = BLAKE2B_OUTBYTES;
		break;
	default:
		return (EINVAL);
	}

	if (csp->csp_auth_mlen > hashlen)
		return (EINVAL);

	if (csp->csp_auth_mlen == 0)
		ses->mlen = hashlen;
	else
		ses->mlen = csp->csp_auth_mlen;
	return (0);
}

static int
blake2b_applicator(void *state, const void *buf, u_int len)
{
	int rc;

	rc = blake2b_update(state, buf, len);
	if (rc != 0)
		return (EINVAL);
	return (0);
}

static int
blake2s_applicator(void *state, const void *buf, u_int len)
{
	int rc;

	rc = blake2s_update(state, buf, len);
	if (rc != 0)
		return (EINVAL);
	return (0);
}

static int
blake2_cipher_process(struct blake2_session *ses, struct cryptop *crp)
{
	union {
		blake2b_state sb;
		blake2s_state ss;
	} bctx;
	char res[BLAKE2B_OUTBYTES], res2[BLAKE2B_OUTBYTES];
	const struct crypto_session_params *csp;
	struct fpu_kern_ctx *ctx;
	const void *key;
	int ctxidx;
	bool kt;
	int error, rc;
	unsigned klen;

	ctx = NULL;
	ctxidx = 0;
	error = EINVAL;

	kt = is_fpu_kern_thread(0);
	if (!kt) {
		ACQUIRE_CTX(ctxidx, ctx);
		fpu_kern_enter(curthread, ctx,
		    FPU_KERN_NORMAL | FPU_KERN_KTHR);
	}

	csp = crypto_get_params(crp->crp_session);
	if (crp->crp_auth_key != NULL)
		key = crp->crp_auth_key;
	else
		key = csp->csp_auth_key;
	klen = csp->csp_auth_klen;
	switch (csp->csp_auth_alg) {
	case CRYPTO_BLAKE2B:
		if (klen > 0)
			rc = blake2b_init_key(&bctx.sb, ses->mlen, key, klen);
		else
			rc = blake2b_init(&bctx.sb, ses->mlen);
		if (rc != 0)
			goto out;
		error = crypto_apply(crp, crp->crp_payload_start,
		    crp->crp_payload_length, blake2b_applicator, &bctx.sb);
		if (error != 0)
			goto out;
		rc = blake2b_final(&bctx.sb, res, ses->mlen);
		if (rc != 0) {
			error = EINVAL;
			goto out;
		}
		break;
	case CRYPTO_BLAKE2S:
		if (klen > 0)
			rc = blake2s_init_key(&bctx.ss, ses->mlen, key, klen);
		else
			rc = blake2s_init(&bctx.ss, ses->mlen);
		if (rc != 0)
			goto out;
		error = crypto_apply(crp, crp->crp_payload_start,
		    crp->crp_payload_length, blake2s_applicator, &bctx.ss);
		if (error != 0)
			goto out;
		rc = blake2s_final(&bctx.ss, res, ses->mlen);
		if (rc != 0) {
			error = EINVAL;
			goto out;
		}
		break;
	default:
		panic("unreachable");
	}

	if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
		crypto_copydata(crp, crp->crp_digest_start, ses->mlen, res2);
		if (timingsafe_bcmp(res, res2, ses->mlen) != 0)
			return (EBADMSG);
	} else
		crypto_copyback(crp, crp->crp_digest_start, ses->mlen, res);

out:
	if (!kt) {
		fpu_kern_leave(curthread, ctx);
		RELEASE_CTX(ctxidx, ctx);
	}
	return (error);
}
Index: head/sys/crypto/via/padlock.h
===================================================================
--- head/sys/crypto/via/padlock.h	(revision 366671)
+++ head/sys/crypto/via/padlock.h	(revision 366672)
@@ -1,91 +1,87 @@
/*-
 * Copyright (c) 2005-2006 Pawel Jakub Dawidek
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * $FreeBSD$
 */

#ifndef _PADLOCK_H_
#define	_PADLOCK_H_

#include
#include
-#if defined(__i386__)
-#include <machine/npx.h>
-#elif defined(__amd64__)
#include <machine/fpu.h>
-#endif

union padlock_cw {
	uint64_t raw;
	struct {
		u_int round_count : 4;
		u_int algorithm_type : 3;
		u_int key_generation : 1;
		u_int intermediate : 1;
		u_int direction : 1;
		u_int key_size : 2;
		u_int filler0 : 20;
		u_int filler1 : 32;
		u_int filler2 : 32;
		u_int filler3 : 32;
	} __field;
};
#define	cw_round_count		__field.round_count
#define	cw_algorithm_type	__field.algorithm_type
#define	cw_key_generation	__field.key_generation
#define	cw_intermediate		__field.intermediate
#define	cw_direction		__field.direction
#define	cw_key_size		__field.key_size
#define	cw_filler0		__field.filler0
#define	cw_filler1		__field.filler1
#define	cw_filler2		__field.filler2
#define	cw_filler3		__field.filler3

struct padlock_session {
	union padlock_cw ses_cw __aligned(16);
	uint32_t ses_ekey[4 * (RIJNDAEL_MAXNR + 1) + 4] __aligned(16);	/* 128 bit aligned */
	uint32_t ses_dkey[4 * (RIJNDAEL_MAXNR + 1) + 4] __aligned(16);	/* 128 bit aligned */
	struct auth_hash *ses_axf;
	uint8_t *ses_ictx;
	uint8_t *ses_octx;
	int ses_mlen;
	struct fpu_kern_ctx *ses_fpu_ctx;
};

#define	PADLOCK_ALIGN(p)	(void *)(roundup2((uintptr_t)(p), 16))

int padlock_cipher_setup(struct padlock_session *ses,
    const struct crypto_session_params *csp);
int padlock_cipher_process(struct padlock_session *ses, struct cryptop *crp,
    const struct crypto_session_params *csp);
bool padlock_hash_check(const struct crypto_session_params *csp);
int padlock_hash_setup(struct padlock_session *ses,
    const struct crypto_session_params *csp);
int padlock_hash_process(struct padlock_session *ses, struct cryptop *crp,
    const struct crypto_session_params *csp);
void padlock_hash_free(struct padlock_session *ses);

#endif /* !_PADLOCK_H_ */
Index: head/sys/i386/include/fpu.h
===================================================================
--- head/sys/i386/include/fpu.h	(nonexistent)
+++ head/sys/i386/include/fpu.h	(revision 366672)
@@ -0,0 +1,6 @@
+/*-
+ * This file is in the public domain.
+ *
+ * $FreeBSD$
+ */
+#include <machine/npx.h>

Property changes on: head/sys/i386/include/fpu.h
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: svn:keywords
## -0,0 +1 ##
+FreeBSD=%H
\ No newline at end of property
Added: svn:mime-type
## -0,0 +1 ##
+text/plain
\ No newline at end of property
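
The manual page above also describes an FPU_KERN_NOCTX variant that none of the drivers in this commit use. A hedged sketch of that pattern, assuming the behavior documented above (the function name is hypothetical):

	#include <sys/param.h>
	#include <sys/proc.h>
	#include <machine/fpu.h>

	/*
	 * FPU_KERN_NOCTX: no save area is allocated and ctx is NULL.
	 * The block must stay very short, since it runs in a critical
	 * section, at the cost of increased system latency.
	 */
	static void
	example_fpu_short_block(void)
	{
		fpu_kern_enter(curthread, NULL, FPU_KERN_NOCTX);

		/* ... a few FPU/SIMD instructions; do not sleep ... */

		fpu_kern_leave(curthread, NULL);
	}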