In-memory computing (IMC) is gaining momentum for the low-power acceleration of data-intensive problems, particularly in the field of artificial intelligence (AI). Analogue matrix-vector multiplication (MVM) in crosspoint arrays can dramatically improve the energy efficiency of neural networks for inference and training [1]. Recently, it has been shown that a feedback configuration of MVM can solve generic matrix problems, such as linear systems, matrix inversion, and eigenvector calculation, in a single computational step [2,3]. This is achieved by combining the high parallelism of MVM in the memory array with the physical iteration executed naturally by the feedback loop. Eigenvector extraction makes it possible to address general problems, such as the PageRank of internet webpages, with complexity O(1), i.e., the time to solve a ranking problem remains fixed irrespective of the size of the coefficient matrix [4].

In this work, we address two specific problems of AI acceleration, namely regression and optimization. Linear regression can be viewed as one of the most basic problems of machine learning: given a sequence of points, one must predict the value of the next point in the sequence. This problem can be solved in just one computational step by the closed-loop crosspoint array of Fig. 1, where the independent variables x_i are physically written into the two crosspoint arrays, while the dependent variables y_i are applied externally as input currents. The crosspoint arrays are connected in a closed-loop configuration by operational amplifiers (OAs) on the rows and columns of the arrays. The OA outputs yield the coefficients a and b of the best-fitting line y = ax + b, i.e., the least-squares regression of the sequence. The approach extends to arbitrary dimensions, i.e., not only data in the x-y plane, but also data in x-y-z space and in any hyperspace of n dimensions. It also extends to logistic regression, which enables classification in one computational step. In particular, the crosspoint regression circuit allows the training of a 2-layer perceptron in just one step, by assuming random weights in the first layer according to the extreme learning machine (ELM) approach. In-memory computing thus appears as a promising option for accelerating neural network training with complexity O(1).

The continuous-time, analogue closed-loop approach can be extended to a discrete-time, digital recurrent neural network (RNN) for addressing optimization problems [5]. The RNN enables the efficient solution of constraint satisfaction problems (CSPs), such as Sudoku, by taking advantage of analogue MVM in the crosspoint array. In the RNN, stochastic iterations minimize the cost function, i.e., the number of violated constraints, thus driving the network toward the global minimum. Noise is injected as spike jitter by phase-change memory devices operating as integrate-and-fire neurons. The convergence to the solution is efficiently accelerated by a 2-layer RNN, whose first layer is programmed to prevent the violation of constraints in the CSP. The in-memory solution of CSPs can dramatically improve the energy efficiency and speed of data-intensive optimization in transport, industry, and society at large.
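As an illustration of the feedback principle described above, the following minimal sketch simulates a negative-feedback loop around an analogue MVM settling to the solution of Ax = b. This is a numerical caricature, not the circuit of [2]: the matrix, the input vector, the integration step dt, and the iteration count are all assumptions made for illustration.

```python
import numpy as np

# Illustrative simulation of a feedback MVM loop solving A x = b.
# The feedback drives the loop until the MVM output matches the
# input vector, i.e. until A x = b. The step `dt` and the iteration
# count are hypothetical simulation parameters, not circuit values.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # conductance matrix stored in the array
b = np.array([9.0, 8.0])     # input currents

x = np.zeros(2)              # OA outputs, initially at rest
dt = 0.05                    # integration step (assumed)
for _ in range(500):
    error = b - A @ x        # residual current sensed on the rows
    x = x + dt * error       # feedback drives the residual to zero

print(x)                      # ~ [2., 3.], the solution of A x = b
print(np.linalg.solve(A, b))  # reference solution
```

The loop converges because the feedback continuously pushes the residual b - Ax toward zero, which is what the OAs do in continuous time.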
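The one-step regression can likewise be checked numerically: the fixed point reached by the closed-loop circuit is the least-squares solution of the normal equations (X^T X) w = X^T y. The data values below are made up for illustration.

```python
import numpy as np

# Least-squares line fit y = a*x + b, the fixed point computed by the
# closed-loop regression circuit. Data values are illustrative.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix: one column per unknown coefficient (a and b). In the
# circuit, these columns are the conductances written into the arrays,
# while y enters as input currents.
X = np.column_stack([x, np.ones_like(x)])

# Normal equations (X^T X) w = X^T y; the feedback loop settles at w.
a, b = np.linalg.solve(X.T @ X, X.T @ y)
print(a, b)   # best-fit slope and intercept
```

Extending the fit to higher dimensions only adds columns to X, which is why the same circuit handles planes and n-dimensional hyperplanes.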
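For the ELM-style training mentioned above, the first layer is random and fixed, so training collapses to a single least-squares regression for the output layer, the operation the crosspoint circuit performs in one step. A minimal sketch, assuming a toy XOR task and an arbitrary hidden-layer size:

```python
import numpy as np

# Extreme learning machine: random, fixed first layer; the output
# layer is obtained by one least-squares regression, which the
# crosspoint circuit solves in a single step. Task and sizes are
# assumptions for illustration.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])        # XOR targets

W1 = rng.normal(size=(2, 20))             # random hidden weights (fixed)
b1 = rng.normal(size=20)
H = np.tanh(X @ W1 + b1)                  # hidden-layer activations

# Output weights from the least-squares solution of H W2 ~= T
W2, *_ = np.linalg.lstsq(H, T, rcond=None)
print(np.round(H @ W2, 2))                # ~ [0., 1., 1., 0.]
```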
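Finally, a sketch of the stochastic RNN idea for CSPs: a Hopfield-style network whose couplings penalize constraint violations, with injected noise annealed over time. The toy problem (2-colouring a 4-node ring), the coupling values, the Gaussian noise model, and the annealing schedule are all illustrative assumptions; in the hardware of [5], the noise arises as spike jitter in the phase-change-memory neurons.

```python
import numpy as np

# Stochastic Hopfield-style RNN for a toy CSP: colour a 4-node ring
# so that neighbours differ. Couplings, noise model, and schedule are
# illustrative assumptions, not the hardware implementation of [5].
rng = np.random.default_rng(1)
n = 4
W = np.zeros((n, n))
for i in range(n):              # negative couplings penalise equal
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = -1.0   # neighbour states

s = rng.choice([-1.0, 1.0], size=n)   # random initial state
noise = 1.0
for _ in range(200):
    i = rng.integers(n)                       # asynchronous update
    h = W[i] @ s + noise * rng.normal()       # MVM + injected noise
    s[i] = 1.0 if h >= 0 else -1.0
    noise *= 0.97                             # annealing schedule

violations = sum(s[i] == s[(i + 1) % n] for i in range(n))
print(s, "violated constraints:", int(violations))
```

The noise lets the network escape local minima of the cost function; as it is annealed away, the state settles into a configuration with zero violated constraints.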
References
[1] D. Ielmini and G. Pedretti, "Device and circuit architectures for in-memory computing," Adv. Intell. Syst. 2000040 (2020).
[2] Z. Sun, G. Pedretti, E. Ambrosi, A. Bricalli, W. Wang, and D. Ielmini, "Solving matrix equations in one step with crosspoint resistive arrays," PNAS 116 (10), 4123-4128 (2019).
[3] Z. Sun, G. Pedretti, A. Bricalli, and D. Ielmini, "One-step regression and classification with crosspoint resistive memory arrays," Sci. Adv. 6, eaay2378 (2020).
[4] Z. Sun, G. Pedretti, E. Ambrosi, A. Bricalli, and D. Ielmini, "In-memory eigenvector computation in time O(1)," Adv. Intell. Syst. (2020).
[5] G. Pedretti, P. Mannocci, S. Hashemkhani, V. Milo, O. Melnic, E. Chicca, and D. Ielmini, "A spiking recurrent neural network with phase change memory neurons and synapses for the accelerated solution of constraint satisfaction problems," IEEE J. Explor. Solid-State Comput. Devices Circuits 6 (2019).

Figure 1: Closed-loop crosspoint array circuit for one-step linear regression (see text).