Adversarial label-flipping attacks are a proven way to degrade model performance: the attacker flips the labels of points in the training set, or adds copies or imitations of points carrying the flipped label. The proposed idea is to identify these poisoned (label-flipped) data points using influence functions. Prior work explored how perturbing the features of a training point influences the loss computed on the test set. Here we instead examine how perturbing the target, i.e. flipping the label, affects the test loss, and whether this signal can be used to identify the poisoned points.
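
As a concrete illustration of the label-perturbation idea, the sketch below scores each training point in a binary logistic-regression model by the approximate change in test loss that flipping its label would cause. This is a minimal sketch under simplifying assumptions: the function names (`fit_logreg`, `label_flip_influence`), the plain gradient-descent fit, and the single Newton-step approximation of the retrained parameters are illustrative choices, not a prescribed implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lam=1e-2, lr=0.5, steps=5000):
    """L2-regularized logistic regression fitted by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y) + lam * w
        w -= lr * grad
    return w

def label_flip_influence(X_tr, y_tr, X_te, y_te, w, lam=1e-2):
    """Approximate change in total test loss if each training label were flipped.

    Flipping y_i changes that example's gradient by (2*y_i - 1) * x_i, so a
    one-step Newton update gives dw ~= -(1/n) H^{-1} (2*y_i - 1) x_i and the
    test loss changes by roughly g_test^T dw.
    """
    n, d = X_tr.shape
    p_tr = sigmoid(X_tr @ w)
    # Hessian of the regularized training objective (lam also acts as damping).
    S = p_tr * (1 - p_tr)
    H = (X_tr * S[:, None]).T @ X_tr / n + lam * np.eye(d)
    # Gradient of the summed test loss at the fitted parameters.
    p_te = sigmoid(X_te @ w)
    g_te = X_te.T @ (p_te - y_te)
    # One score per training point: score_i ~ -(1/n) (2*y_i - 1) x_i^T H^{-1} g_test.
    v = np.linalg.solve(H, g_te)          # v = H^{-1} g_test
    return -(2 * y_tr - 1) * (X_tr @ v) / n
```

Under this assumption-laden sketch, a strongly negative score means that flipping the point's label (back) is predicted to lower the test loss, so the most negative scores flag likely poisoned points, e.g. `suspects = np.argsort(label_flip_influence(X_tr, y_tr, X_te, y_te, fit_logreg(X_tr, y_tr)))[:k]` for some hypothetical budget `k`.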