Explore

You are here: www.assemblyai.com

blog.fastforwardlabs.com (10.6 parsecs away)

[Image: Social Soul, an immersive experience of being inside a social media stream, by Lauren McCarthy and Kyle McDonald]

A few weeks ago, theCUBE stopped by the Fast Forward Labs offices to interview us about our approach to innovation. In the interview, we highlighted that artists have an important role to play in shaping the future of machine intelligence. Unconstrained by market demands and product management requirements, artists are free to probe the potential of new technologies. And by optimizing for intuitive power or emotional resonance over theoretical accuracy or usability, they open channels to understanding how machine intelligence is always, at its essence, a study of our own humanity.

One provocative artist exploring the creative potential of new machine learning tools is Kyle McDonald. McDonald has seized the deep learning moment, undertaking projects that use neural networks to document a stroll down the Amsterdam canals, recreate images in the style of famous painters, or challenge our awareness of what we hold to be reality. We interviewed Kyle to learn how he thinks about his work. Keep reading for highlights.
stability.ai (5.2 parsecs away)

Stable Audio represents cutting-edge audio generation research from Stability AI's generative audio research lab, Harmonai. We continue to refine our model architectures, datasets, and training procedures to improve output quality, controllability, inference speed, and output length.
www.analyticsvidhya.com (9.0 parsecs away)

Discover the unique capabilities of diffusion models, an innovative text-to-image approach fueled by LLMs, which seamlessly handles diverse inputs and more.
iclr-blogposts.github.io (46.7 parsecs away)

The product of the Hessian of a function with a vector, the Hessian-vector product (HVP), is a fundamental quantity for studying the variation of a function. It is ubiquitous in traditional optimization and machine learning. However, the computation of HVPs is often considered prohibitive in the context of deep learning, driving practitioners to use proxy quantities to evaluate the loss geometry. Standard automatic differentiation theory predicts that the computational complexity of an HVP is of the same order of magnitude as the complexity of computing a gradient. The goal of this blog post is to provide a practical counterpart to this theoretical result, showing that modern automatic differentiation frameworks, JAX and PyTorch, allow for efficient computation of these HVPs for standard deep learning cost functions.
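To make that claim concrete, here is a minimal sketch (not taken from the linked post) of the forward-over-reverse pattern for computing an HVP in JAX; the loss function and its inputs are hypothetical placeholders.

import jax
import jax.numpy as jnp

def hvp(f, x, v):
    # Hessian-vector product of f at x along direction v, computed as a
    # JVP of the gradient (forward-over-reverse). The cost is a small
    # constant factor times one gradient evaluation.
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

# Hypothetical smooth cost standing in for a deep learning loss.
def loss(w):
    return jnp.sum(w ** 2) + jnp.sum(w ** 4)

w = jnp.arange(5.0)     # current parameters
v = jnp.ones_like(w)    # direction along which curvature is probed
print(hvp(loss, w, v))  # roughly the cost of one extra gradient evaluation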