In this work, we design and test a framework for neural decoding in Brain-Machine Interfaces (BMIs) based on the Perception-Action-Reward Cycle (PARC). Here the neural decoder in the BMI learns to translate motor neural states in the primary motor cortex (M1) into actions based on a reward signal estimated directly from the Nucleus Accumbens (NAcc). The control architecture was designed around the Actor-Critic method of Reinforcement Learning. We tested decoding performance by simultaneously recording M1 and NAcc neural data in a rat during a robot-assisted reaching task. This work shows that a BMI can be trained from a naïve state to perform a reaching task using motor and error feedback signals obtained directly from the brain.
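To make the Actor-Critic arrangement concrete, the following is a minimal sketch of a per-trial actor-critic update in which the critic's error signal is driven by an externally supplied scalar reward, standing in for the NAcc-derived reward estimate. All names, dimensions, action sets, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8   # illustrative dimensionality of the M1 firing-rate vector
N_ACTIONS = 4    # illustrative discrete robot-arm movements

W_actor = np.zeros((N_ACTIONS, N_FEATURES))  # actor: softmax policy weights
w_critic = np.zeros(N_FEATURES)              # critic: state-value weights
ALPHA_A = ALPHA_C = 0.1                      # learning rates

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def choose_action(s):
    """Actor: sample a movement from a softmax policy over M1 features."""
    return rng.choice(N_ACTIONS, p=softmax(W_actor @ s))

def update(s, a, reward):
    """One per-trial actor-critic update; `reward` stands in for the
    NAcc-derived reward estimate (modeled here as a plain scalar)."""
    global W_actor, w_critic
    delta = reward - w_critic @ s            # TD error for a one-step trial
    w_critic = w_critic + ALPHA_C * delta * s
    probs = softmax(W_actor @ s)
    grad = -probs[:, None] * s[None, :]      # policy gradient for softmax
    grad[a] += s
    W_actor = W_actor + ALPHA_A * delta * grad

# Toy training loop: one fixed neural state, action 0 is the "correct" reach.
s = rng.random(N_FEATURES)
for _ in range(500):
    a = choose_action(s)
    update(s, a, reward=1.0 if a == 0 else 0.0)
```

After a few hundred simulated trials the policy concentrates on the rewarded action, which is the essential mechanism: the actor is shaped by a reward signal it never computes itself, only receives.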