Secure Autonomous Systems

Instructor: Prof. Sibin Mohan, The George Washington University

CSCI 6907/3907 | Fall 2022 Term | TR 12:45PM - 02:00PM ET | SMTH 115

MP III: V2X Misbehavior Detection Contest in VEINS


Announcement Date Oct. 25, 2022
Submission Date Nov. 7, 2022
Submission Time 11:59 PM PT
Total Points 40


In this MP, we will build on our previous experience with V2X networks and misbehavior mechanisms to implement our own misbehavior detector (MBD).

You are free to implement your own MBD solution or to adapt any existing one.

We have provided training and test datasets. Use the training dataset to develop your MBD solution, then run it on the messages in the test dataset and upload the results to the website for benchmarking.

The link to the training and test datasets is:

The objectives of this MP are as follows:


We have already run preliminary simulations for the training and test sets and recorded the V2X data using the VEINS and F2MD simulators (from MP I-A). You can download the training and test sets from the links and use them to implement and evaluate your custom MBD solution offline, without having to integrate with the VEINS simulator.

MP III Instructions

Here, we define the details of the training and test datasets. The training dataset includes records for 6298 vehicles, while the test dataset contains 4369 vehicles, from a total of 8 hours of simulation.

  1. traceJSON-$vehicleID$-$A0-15$ files contain the received messages for each vehicle in the simulation. For instance, the file name traceJSON-33-A7 indicates:
  2. Every traceJSON file contains two types of messages:
  3. Both type:2 and type:3 messages contain the following information, which you should use to implement your MBD:
  4. Loop over the VeReMi file names (the easy way) to extract the attacker and genuine vehicle IDs (A0 vehicles are genuine; all others are attackers). These records will serve as the ground-truth labels for evaluating the performance of our MBD app:
    Note that these labels are masked for the test dataset; use your MBD to find the attack messages and upload your labels for evaluation.
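The filename convention in step 4 can be parsed with a few lines of Python. This is a minimal sketch that assumes the file names literally follow the traceJSON-$vehicleID$-$A0-15$ pattern described above; the helper name is ours, not part of the dataset:

```python
import re

# Matches names like "traceJSON-33-A7": vehicle 33, attacker type A7.
# Assumption: real file names follow exactly this pattern.
_TRACE_NAME = re.compile(r"traceJSON-(\d+)-A(\d+)")

def label_from_filename(name):
    """Return (vehicle_id, is_attacker) parsed from a trace file name.

    A0 marks a genuine vehicle; any other attacker type (A1-A15)
    is treated as an attacker, per the instructions above.
    """
    m = _TRACE_NAME.search(name)
    if m is None:
        raise ValueError(f"unrecognized trace file name: {name}")
    vehicle_id, attack_type = int(m.group(1)), int(m.group(2))
    return vehicle_id, attack_type != 0
```

For example, `label_from_filename("traceJSON-33-A7")` identifies vehicle 33 as an attacker, while an `-A0` suffix marks the vehicle as genuine.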

  5. Submission Instructions

    1. Use the training dataset to implement your MBD and create a detection label for each message in the test dataset. Mark attack messages with 1 and genuine messages with 0.
    2. Prepare the detection label file as a simple CSV file with two columns:
      messageID and label.

      messageID is the unique message identifier, and label is the attack label of the message (0 for genuine messages, 1 for attacks):


      You should sort your labels file based on the messageID column before uploading.
    3. A sample detection label file (for the training set) is provided as training_key.csv, which contains the ground-truth label for each messageID in the training set.

      Prepare your labels in the same format for the test set and upload them to the website to check your performance.

    4. For your convenience, a template file for the test data is also provided as test_template_key.csv.

      However, the labels in this file are masked (all zero). Fill in the correct label for each messageID in the test set using your MBD, then upload your test key file, keeping the same format and messageIDs as test_template_key.csv.
    5. You may upload your detection labels an unlimited number of times until the deadline. You may choose to publish your current performance on the public dashboard or keep it private. The public dashboard shows the performance of all participants and who currently has the best score.
    The dashboard is available at: Contest Dashboard

    Instructions on how to use the dashboard:
    1. A public leaderboard is visible on the dashboard homepage, where you can see the performance of all participants along with submission details. The score is the F-1 score, but you are also notified of your recall after each submission, for your reference.
    2. To start, register with a username and password of your choice on the register tab. After registering, you can sign in with your credentials from the main dashboard homepage.
    3. To upload your solution publicly, sign in, choose public as the submission type, choose your labels file (test_template_key.csv with your labels), and upload it. You will be notified of your performance within 30 seconds, and the public dashboard score will be updated.
    4. You can also upload your results privately: choose private as the submission type, choose your labels file, and upload it. You will be notified of your performance within 30 seconds and can see your submission details on your private dashboard. Private submissions do not update the public dashboard.
    5. You can make unlimited submissions; only your best submission so far will appear on the public dashboard.
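Steps 2-4 above can be sketched in a few lines of Python. This sketch assumes the template columns are literally named messageID and label and that messageIDs are numeric (adjust the sort key if they are not); the function name and `predict` callable are ours:

```python
import csv

def write_label_file(template_path, out_path, predict):
    """Fill the provided test template with predicted labels.

    `predict` is any callable mapping a messageID (string) to 0 or 1.
    The output keeps exactly the messageIDs from the template and is
    sorted by the messageID column before writing, as required.
    """
    with open(template_path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["label"] = predict(row["messageID"])
    # Assumption: messageIDs are numeric; drop int() for string IDs.
    rows.sort(key=lambda r: int(r["messageID"]))
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["messageID", "label"])
        writer.writeheader()
        writer.writerows(rows)
```

Writing through the template (rather than generating messageIDs yourself) guarantees your upload has exactly the IDs the dashboard expects.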

    You should submit the following for MP-3:
    1. Test labels from your MBD, uploaded to the dashboard website. An F-1 score above 0.6 will receive full points for this section. [10 pt]
    2. The following artifacts [30 pt]:
      • A 2-3 page report describing your MBD system/logic (15 pt)
      • A discussion of your results (10 pt)
      • A reference to your code implementation (5 pt)
      Note that your code does not have to be "submission clean" just yet. The code is a complementary artifact to your report and will allow us to give more useful feedback for future improvements.
    3. The winner of the "contest" (i.e., whoever leads the leaderboard at the deadline) will receive bonus points. [10 pt]
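Since the contest is scored by F-1 with attacks as the positive class, it is worth checking your detector against training_key.csv before uploading. A minimal self-scoring sketch (the function name is ours; `predictions` maps messageID to 0/1):

```python
import csv

def f1_against_key(key_path, predictions):
    """Score predicted labels against a key file such as training_key.csv.

    Returns (precision, recall, f1) with attacks (label 1) as the
    positive class, matching the contest's F-1 scoring. Missing
    messageIDs in `predictions` are counted as 0 (genuine).
    """
    tp = fp = fn = 0
    with open(key_path, newline="") as f:
        for row in csv.DictReader(f):
            truth = int(row["label"])
            pred = int(predictions.get(row["messageID"], 0))
            if pred == 1 and truth == 1:
                tp += 1
            elif pred == 1 and truth == 0:
                fp += 1
            elif pred == 0 and truth == 1:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Checking precision and recall separately also tells you whether you are missing attacks (low recall) or over-flagging genuine messages (low precision).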

    Sample MBD Implementations

    Here are some sample implementations of existing MBD research for your reference:
    1. VeReMi: A Dataset for Comparable Evaluation of Misbehavior Detection in VANETs
        This paper includes general information about the dataset, evaluation methods, and plausibility detectors with their performance on attacks.
    2. VeReMi Extension: A Dataset for Comparable Evaluation of Misbehavior Detection in VANETs
        This paper includes general information about the dataset with added attacks, evaluation methods, and two detection approaches with plausibility detectors, including their performance on attacks.
    3. F2MD: Simulation Framework for Misbehavior Detection in Vehicular Networks
        This is the original paper describing the F2MD framework, the simulator we used in previous MPs. The paper and the source code in our simulator include multiple MBD approaches with code implementations, including plausibility- and ML-based ones.
    4. CaTch: A Confidence Range Tolerant Misbehavior Detection Approach
        Another paper that implements MBD with confidence range tolerance, using the same BSM fields as our dataset.
    5. Vehicle Behavior Analysis to Enhance Security in VANETs
        This paper presents a new approach to MBD, based on vehicle behavior and trust analysis.
    6. Trust On the Security of Wireless Vehicular Ad-hoc Networking
        A simple trust logic implementation based on the distance between sender and receiver.
    7. Trust mechanisms in wireless sensor networks: Attack analysis and countermeasures
        A paper from a similar discipline that analyzes attacks and countermeasures for defense mechanisms against similar attacks in wireless sensor networks.
    8. MisDis: An Efficient Misbehavior Discovering Method Based on Accountability and State Machine in VANET
        Another MBD implementation, based on the accountability of broadcast BSMs.
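In the spirit of the plausibility detectors above, here is a minimal single-check sketch: flag a message if the sender's implied speed between consecutive reported positions is physically implausible. The field names (senderID, time, x, y), the class name, and the 70 m/s threshold are all our assumptions, not fields guaranteed by the traceJSON format; adapt them to the actual message fields in the dataset:

```python
import math

class SpeedPlausibilityDetector:
    """Toy plausibility check: one rule, stateful per sender.

    Flags a message (returns 1) when the straight-line speed implied
    by two consecutive position reports from the same sender exceeds
    a physical limit; otherwise returns 0 (genuine).
    """

    def __init__(self, max_speed=70.0):  # m/s; generous highway bound
        self.max_speed = max_speed
        self.last_seen = {}  # senderID -> (time, x, y)

    def check(self, msg):
        sender = msg["senderID"]
        t, x, y = msg["time"], msg["x"], msg["y"]
        label = 0
        if sender in self.last_seen:
            t0, x0, y0 = self.last_seen[sender]
            dt = t - t0
            if dt > 0:
                implied_speed = math.hypot(x - x0, y - y0) / dt
                if implied_speed > self.max_speed:
                    label = 1
        self.last_seen[sender] = (t, x, y)
        return label
```

A competitive solution would combine several such checks (and possibly an ML classifier, as in the F2MD paper), but a single-rule detector like this is a reasonable baseline to get a first score on the dashboard.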


    Additional VEINS resources:
    1. Another well-written VEINS tutorial for reference:
    2. A custom simulation tutorial in VEINS: