NovaPay processes millions of card transactions per day and needs a lightweight fraud model for near-real-time scoring. The ML team wants to evaluate how activation function choice affects training stability, nonlinearity, and fraud detection performance in a feed-forward neural network.
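Before tackling the task, it helps to see why activation functions matter at all: stacked linear layers with no nonlinearity between them collapse into a single linear map, so the network can never learn nonlinear fraud patterns. A minimal NumPy sketch (random weights, hypothetical 30-feature input standing in for the transaction table) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 30))    # 5 transactions, 30 features (illustrative)
W1 = rng.normal(size=(30, 16))  # "hidden layer" weights
W2 = rng.normal(size=(16, 1))   # "output layer" weights

# Two layers with no activation in between...
deep = x @ W1 @ W2
# ...are exactly equivalent to one linear layer with weights W1 @ W2.
shallow = x @ (W1 @ W2)
assert np.allclose(deep, shallow)

# Inserting a nonlinearity (here ReLU) breaks that equivalence,
# which is what lets depth add representational power.
relu = lambda z: np.maximum(z, 0.0)
nonlinear = relu(x @ W1) @ W2
assert not np.allclose(nonlinear, shallow)
```

A strong write-up would pair this argument with the empirical comparison the task asks for.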
You are given a transaction-level binary classification dataset derived from 6 months of payment activity.
| Feature Group | Count | Examples |
|---|---|---|
| Transaction amount and velocity | 8 | amount, amount_zscore_24h, tx_count_1h, tx_count_24h |
| Merchant and channel | 6 | merchant_category, entry_mode, ecommerce_flag, device_type |
| Customer behavior | 7 | avg_ticket_30d, chargeback_rate_90d, country_match, account_age_days |
| Time features | 4 | hour_of_day, day_of_week, weekend_flag, seconds_since_last_tx |
| Risk signals | 5 | ip_risk_score, email_domain_risk, card_bin_risk, prior_declines |
Target: `is_fraud` (1 if confirmed fraud within 14 days, else 0).

A strong solution should demonstrate why activation functions are necessary, compare at least three options (for example ReLU, tanh, and sigmoid in hidden layers), and achieve PR-AUC > 0.45 with recall >= 0.75 at precision >= 0.20 on the test set.
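The comparison itself can be sketched with scikit-learn's `MLPClassifier`, which supports `relu`, `tanh`, and `logistic` (sigmoid) hidden activations, scored with `average_precision_score` (PR-AUC). This is an illustrative sketch only: `make_classification` stands in for the real NovaPay data, and the network sizes and sample counts are assumptions, not requirements of the task.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import average_precision_score

# Synthetic stand-in for the 30-feature transaction table, ~2% positive rate.
X, y = make_classification(n_samples=20_000, n_features=30, n_informative=12,
                           weights=[0.98], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Scaling matters especially for tanh/sigmoid, which saturate on large inputs.
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# scikit-learn calls sigmoid 'logistic'.
for act in ["relu", "tanh", "logistic"]:
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation=act,
                        max_iter=200, random_state=0).fit(X_tr, y_tr)
    pr_auc = average_precision_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{act:8s} PR-AUC = {pr_auc:.3f}")
```

On the real data, the same loop would also log training-loss curves to compare stability, and sweep the decision threshold on validation data to check the recall >= 0.75 at precision >= 0.20 operating point.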