Updated on 2025/06/30

TODA Koji
 
Organization
Faculty of Information Engineering, Department of Computer Science and Engineering, Associate Professor
Graduate School of Engineering, Master's Program in Computer Science and Engineering, Associate Professor
Title
Associate Professor

Degree

  • Doctor of Engineering

Research Areas

  • Informatics / Software

Education

  • Nara Institute of Science and Technology

  • Osaka University

Professional Memberships

  • The Institute of Electronics, Information and Communication Engineers (IEICE)

  • IEEE

  • Information Processing Society of Japan (IPSJ)

Committee Memberships

  • Information Processing Society of Japan (IPSJ)   Steering Committee Member, Special Interest Group on Software Engineering

    2024.4   

  • Japan Society for Software Science and Technology (JSSST)   Co-Chair, Workshop on Foundation of Software Engineering (FOSE2024)

    2023.12   

  • Information Processing Society of Japan (IPSJ)   Editorial Board Member, IPSJ Journal

    2022.6   

Papers

  • The Impact of Defect (Re) Prediction on Software Testing

    MURAKAMI Yukasa, YAMASAKI Yuta, TSUNODA Masateru, MONDEN Akito, TAHIR Amjed, BENNIN Kwabena Ebo, TODA Koji, NAKASAI Keitaro

    IEICE Transactions on Information and Systems   E108.D ( 3 )   175 - 179   2025.3

    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Cross-project defect prediction (CPDP) aims to use data from external projects, as historical data may not be available from the same project. In CPDP, deciding on a particular historical project to build a training model can be difficult. To help with this decision, a Bandit Algorithm (BA) based approach has been proposed in prior research to select the most suitable learning project. However, this BA method could lead to the selection of unsuitable data during the early iterations of BA (i.e., the early stage of software testing). Selecting an unsuitable model can reduce prediction accuracy, leading to potential defect overlooking. This study aims to improve the BA method to reduce defect overlooking, especially during the early testing stages. Once all modules have been tested, modules tested in the early stage are re-predicted, and some modules are retested based on the re-prediction. To assess the impact of re-prediction and retesting, we applied five kinds of BA methods, using 8, 16, and 32 OSS projects as learning data. The results show that the newly proposed approach steadily reduced the probability of defect overlooking without degrading prediction accuracy.

    DOI: 10.1587/transinf.2024MPL0002

    Scopus

    CiNii Research

  • Building Defect Prediction Models by Online Learning Considering Defect Overlooking

    FEDOROV Nikolay, YAMASAKI Yuta, TSUNODA Masateru, MONDEN Akito, TAHIR Amjed, BENNIN Kwabena Ebo, TODA Koji, NAKASAI Keitaro

    IEICE Transactions on Information and Systems   E108.D ( 3 )   170 - 174   2025.3

    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Building defect prediction models based on online learning can enhance prediction accuracy: online learning continuously rebuilds the prediction model as new data points are added. However, a module predicted as “non-defective” can result in fewer test cases for such modules, so a defective module can be overlooked during testing. The erroneous test results are used as learning data by online learning, which could negatively affect prediction accuracy. To suppress this negative influence, we propose to apply a method that fixes the prediction as positive during the initial stage of online learning. Additionally, we improved the method to consider the probability of defect overlooking. In our experiment, we demonstrate this negative influence on prediction accuracy and the effectiveness of our approach. The results show that our approach did not negatively affect AUC but significantly improved recall.

    DOI: 10.1587/transinf.2024MPL0001

    Scopus

    CiNii Research

  • On Applying Bandit Algorithm to Fault Localization Techniques

    Masato Nakao, Kensei Hamamoto, Masateru Tsunoda, Amjed Tahir, Koji Toda, Akito Monden, Keitaro Nakasai, Kenichi Matsumoto

    2024 IEEE 35th International Symposium on Software Reliability Engineering Workshops (ISSREW)   111 - 112   2024

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    DOI: 10.1109/ISSREW63542.2024.00060

    Scopus

    researchmap

  • On the Application of Bandit Algorithm for Selecting Clone Detection Methods

    TSUNODA Masateru, KUDO Takuto, MONDEN Akito, TAHIR Amjed, BENNIN Kwabena Ebo, TODA Koji, NAKASAI Keitaro, MATSUMOTO Kenichi

    IEICE Transactions on Information and Systems   Advance online publication   2024

    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Various clone detection methods have been proposed, with results varying depending on the combination of the methods and hyperparameters used (i.e., configurations). To help select a suitable clone detection configuration, we propose two Bandit Algorithm (BA) based methods that can help evaluate the configurations used dynamically while using detection methods. Our analysis showed that the two proposed methods, the naïve method and BANC (BA considering Negative Cases), identified the best configurations from four used code clone detection methods with high probability.

    DOI: 10.1587/transinf.2024iil0002

    CiNii Research

  • Selecting Source Code Generation Tools Based on Bandit Algorithms

    TSUNODA Masateru, SHIMA Ryoto, TAHIR Amjed, BENNIN Kwabena Ebo, MONDEN Akito, TODA Koji, NAKASAI Keitaro

    IEICE Transactions on Information and Systems   Advance online publication   2024

    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Background: Code generation tools such as GitHub Copilot have received attention due to their performance in generating code. Generally, a prior analysis of their performance is needed to select new code generation tools from a list of candidates. Without such analysis, there is a higher risk of selecting an ineffective tool, which would negatively affect software development productivity. Additionally, conducting prior analysis of new code generation tools is often time-consuming. Aim: To use a new code generation tool without prior analysis but with low risk, we propose to evaluate the new tools during software development (i.e., online optimization). Method: We apply the bandit algorithm (BA) approach to help select the best code suggestion or generation tool among a list of candidates. Developers evaluate whether the result of the tool is correct or not. As code generation and evaluation are repeated, the evaluation results are saved, and we utilize the stored evaluation results to select the best tool based on the BA approach. In our preliminary analysis, we evaluated five tools with 164 code generation cases using BA. Result: The BA approach selected ChatGPT as the best tool as the evaluation proceeded, and during the evaluation, the average accuracy of the BA approach outperformed the second-best performing tool. Our results reveal the feasibility and effectiveness of BA in assisting the selection of the best-performing code suggestion or generation tools.

    DOI: 10.1587/transinf.2024iil0001

    CiNii Research

  • An Empirical Study of the Impact of Test Strategies on Online Optimization for Ensemble-Learning Defect Prediction

    Hamamoto K., Tsunoda M., Tahir A., Bennin K.E., Monden A., Toda K., Nakasai K., Matsumoto K.

    Proceedings of the 2024 IEEE International Conference on Software Maintenance and Evolution (ICSME 2024)   642 - 647   2024

    Publisher:IEEE  

    Ensemble learning methods have been used to enhance the reliability of defect prediction models. However, no single method consistently attains the highest accuracy across software projects. This work aims to improve the performance of ensemble-learning defect prediction across such projects by helping select the highest-accuracy ensemble methods. We employ bandit algorithms (BA), an online optimization method, to select the highest-accuracy ensemble method. Each software module is tested sequentially, and bandit algorithms utilize the test outcomes of the modules to evaluate the performance of the ensemble learning methods. The test strategy followed might impact the testing effort and prediction accuracy when applying online optimization; hence, we analyzed the test order's influence on BA's performance. In our experiment, we used six popular defect prediction datasets, four ensemble learning methods such as bagging, and three test strategies such as testing positive-prediction modules first (PF). Our results show that when BA is applied with PF, the prediction accuracy improved on average, and the number of found defects increased by 7% on at least five of the six datasets (although with a slight increase in the testing effort, by about 4%, compared to ordinary ensemble learning). Hence, BA with the PF strategy is the most effective way to attain the highest prediction accuracy using ensemble methods on various projects.

    DOI: 10.1109/ICSME58944.2024.00066

    Scopus

  • A Novel Approach to Address External Validity Issues in Fault Prediction Using Bandit Algorithms Reviewed

    Teruki HAYAKAWA, Masateru TSUNODA, Koji TODA, Keitaro NAKASAI, Amjed TAHIR, Kwabena Ebo BENNIN, Akito MONDEN, Kenichi MATSUMOTO

    IEICE Transactions on Information and Systems   E104.D ( 2 )   327 - 331   2021

  • A Comparative Evaluation of Coverage-Based Fuzzing Tools Reviewed

    都築 夏樹, 吉田 則裕, 戸田 航史, 山本 椋太, 高田 広章

    Computer Software   37 ( 2 )   97 - 103   2020.5

  • Capturing Spontaneous Software Evolution in a Social Coding Platform with Project-as-a-City Concept Reviewed

    Koji Toda, Haruaki Tamada, Masahide Nakamura, Kenichi Matsumoto

    International Journal of Software Innovation   8 ( 3 )   35 - 50   2020

  • Evaluation of Software Fault Prediction Models Considering Faultless Cases Reviewed

    Yukasa Murakami, Masateru Tsunoda, Koji Toda

    IEICE Transactions on Information and Systems   E103.D ( 6 )   1319 - 1327   2020

  • An Empirical Evaluation of Six Missing-Value Imputation Methods for Effort Estimation Reviewed

    戸田 航史, 角田雅照

    Computer Software   36 ( 4 )   95 - 106   2019.12

  • A Performance Comparison of Missing-Value Imputation Methods for Effort Estimation Using Multiple Regression Analysis Reviewed

    戸田 航史, 角田 雅照

    Computer Software   34 ( 4 )   150 - 155   2017.10

  • An Investigation of the Impact of Data Cleansing on Code Review Analysis Reviewed

    戸田航史, 亀井靖高, 吉田則裕

    IPSJ Journal   58 ( 4 )   845 - 854   2017.4

  • Bug-Fix Time Prediction Considering Social Relationships between Managers and Fixers in OSS Development Reviewed

    吉行勇人, 大平雅雄, 戸田航史

    Computer Software   32 ( 2 )   128 - 134   2015.4

  • An Analysis of How Review and Patch Development Experience Affects Review Time in the Chromium Project Reviewed

    戸田航史, 亀井靖高, 濱崎一樹, 吉田則裕

    Computer Software   32 ( 1 )   227 - 233   2015.3

  • An Empirical Evaluation of 11 Fault Density Prediction Models Reviewed

    小林 寛武, 戸田 航史, 亀井 靖高, 門田 暁人, 峯 恒憲, 鵜林 尚靖

    IEICE Transactions on Information and Systems (Japanese Edition)   J96-D ( 8 )   1892 - 1902   2013.8

  • Revisiting Software Development Effort Estimation Based on Early Phase Development Activities

    Masateru Tsunoda, Koji Toda, Kyohei Fushida, Yasutaka Kamei, Meiyappan Nagappan, and Naoyasu Ubayashi

    In Proc. of Working Conference on Mining Software Repositories (MSR 2013)   429 - 438   2013.5

  • A Hybrid Effort Estimation Method Using Multiple Regression Analysis and Project Similarity Reviewed

    戸田 航史, 角田雅照, 門田 暁人, 松本 健一

    Computer Software   30 ( 2 )   227 - 233   2013.4

  • An Ensemble Approach of Simple Regression Models to Cross-Project Fault Prediction

    Satoshi Uchigaki, Shinji Uchida, Koji Toda, Akito Monden, Toshihiko Nakano, and Yutaka Fukuchi

    In Proc. of International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2012)   2012.8

  • Automated classification of user's statement in requirement specification phase

    Koji Toda and Ken-ichi Matsumoto

    In Proc. of Joint Conference of International Workshop on Software Measurement and International Conference on Software Process and Product Measurement (IWSM/Mensura)   2011.11

  • Fit Data Selection Based on Project Features for Software Effort Estimation Models

    Koji Toda, Akito Monden, Ken-ichi Matsumoto

    In Advances in Computer Science and Engineering (ACSE2010)   82 - 88   2010.3

  • A Fit Data Selection Method for Software Development Effort Estimation Reviewed

    戸田 航史, 門田 暁人, 松本 健一

    IPSJ Journal   50 ( 11 )   2699 - 2709   2009.11

  • An Experimental Evaluation of Similarity-Based Missing-Value Imputation for Effort Estimation Reviewed

    田村 晃一, 柿元 健, 戸田 航史, 角田 雅照, 門田 暁人, 松本 健一, 大杉 直樹

    Computer Software   26 ( 3 )   44 - 55   2009.8

  • Empirical Evaluation of Missing Data Techniques for Effort Estimation

    Koichi Tamura, Takeshi Kakimoto, Koji Toda, Masateru Tsunoda, Akito Monden, and Ken-ichi Matsumoto

    In Proc. of International Workshop on Software Productivity Analysis And Cost Estimation (SPACE2008)   4 - 9   2008.12

  • Fit Data Selection for Software Effort Estimation Models

    Koji Toda, Akito Monden, and Ken-ichi Matsumoto

    In Proc. of the 2nd international symposium on Empirical software engineering and measurement (ESEM'08)   360 - 361   2008.10

  • A Variable Selection Method Suited for Effort Estimation Based on Project Similarity Reviewed

    瀧 進也, 戸田 航史, 門田 暁人, 柿元 健, 角田 雅照, 大杉 直樹, 松本 健一

    IPSJ Journal   49 ( 7 )   2338 - 2348   2008.7

  • A Quantitative Evaluation of a Software Development Effort Estimation Method Using Early-Phase Activity Data Reviewed

    角田 雅照, 戸田 航史, 伏田 享平, 亀井 靖高, 鵜林 尚靖

    Computer Software   31 ( 2 )   129 - 143  

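Several of the papers above apply bandit algorithms (BA) to select, online, the best-performing option among candidates — learning projects, clone-detection configurations, code generation tools, or ensemble methods. As a rough illustration of that general idea only (not the method of any particular paper; the tool names and success rates below are hypothetical), a minimal epsilon-greedy bandit sketch:

```python
import random

def epsilon_greedy_select(tools, rewards, counts, epsilon=0.1):
    """Pick a tool index: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.randrange(len(tools))
    # Exploit: choose the tool with the highest mean observed reward so far.
    means = [rewards[i] / counts[i] if counts[i] else 0.0 for i in range(len(tools))]
    return means.index(max(means))

def run_bandit(tools, judge, trials=1000, epsilon=0.1, seed=0):
    """Repeatedly select a tool, observe a 0/1 judgment, update statistics."""
    random.seed(seed)
    rewards = [0.0] * len(tools)
    counts = [0] * len(tools)
    for _ in range(trials):
        i = epsilon_greedy_select(tools, rewards, counts, epsilon)
        rewards[i] += judge(tools[i])  # 1 if the tool's output was judged correct
        counts[i] += 1
    return counts

# Hypothetical example: tool "B" succeeds 80% of the time, tool "A" only 50%.
success_rate = {"A": 0.5, "B": 0.8}
counts = run_bandit(["A", "B"],
                    lambda t: 1 if random.random() < success_rate[t] else 0)
# After enough trials, the better tool "B" is selected far more often.
```

Each iteration corresponds to one judgment of a tool's output (e.g., a developer marking generated code as correct or not); epsilon trades off exploring under-tried candidates against exploiting the current best estimate.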

Research Projects

  • Automated Repair Techniques for Defects Detected by Fuzzing

    Grant number:24K02923  2024.4 - 2028.3

    Japan Society for the Promotion of Science (JSPS)   Grants-in-Aid for Scientific Research   Grant-in-Aid for Scientific Research (B)

    Authorship:Coinvestigator(s)  Grant type:Competitive

    Grant amount: ¥18,330,000 (Direct Cost: ¥14,100,000, Indirect Cost: ¥4,230,000)

  • Development of Fundamental Technologies for Accelerating Spontaneous Software Evolution

    Grant number:17H00731  2017.4 - 2020.3

    Japan Society for the Promotion of Science (JSPS)   Grants-in-Aid for Scientific Research   Grant-in-Aid for Scientific Research (A)

    Authorship:Coinvestigator(s)  Grant type:Competitive

    Grant amount: ¥44,850,000 (Direct Cost: ¥34,500,000, Indirect Cost: ¥10,350,000)

Teaching Experience (On-campus)

  • 2024   Computer Architecture I

  • 2024   Probability and Statistics

  • 2024   Seminar on Fundamentals of Computer

  • 2024   Project-Based Training I

  • 2024   Project-Based Training II

  • 2024   Experiments of Computer Science and

  • 2024   Special Lectures in Computer Science and

  • 2024   Graduation Study

  • 2024   Seminar in Intelligent Information

  • 2023   Computer Architecture I

  • 2023   Probability and Statistics

  • 2023   Seminar on Fundamentals of Computer

  • 2023   Project-Based Training I

  • 2023   Project-Based Training II

  • 2023   Experiments of Computer Science and

  • 2023   Graduation Study

  • 2023   Seminar in Intelligent Information

  • 2022   Seminar on Fundamentals of Computer

  • 2022   Computer Architecture I

  • 2022   Project-Based Training I

  • 2022   Probability and Statistics

  • 2022   Experiments of Computer Science and

  • 2022   Special Lectures in Computer Science and

  • 2022   Graduation Study

  • 2022   Seminar in Intelligent Information

  • 2021   Seminar on Fundamentals of Computer

  • 2021   Probability and Statistics

  • 2021   Project-Based Training I

  • 2021   Computer Architecture II

  • 2021   Project-Based Training II

  • 2021   Experiments of Computer Science and

  • 2021   Special Lectures in Computer Science and

  • 2021   Graduation Study

  • 2021   Seminar in Intelligent Information

  • 2020   Seminar on Fundamentals of Computer

  • 2020   Project-Based Training I

  • 2020   Probability and Statistics

  • 2020   Computer Architecture II

  • 2020   Experiments of Computer Science and

  • 2020   Special Lectures in Computer Science and

  • 2020   Graduation Study

  • 2020   Seminar in Intelligent Information

  • 2019   Seminar on Fundamentals of Computer

  • 2019   Computer Architecture II

  • 2019   Project-Based Training I

  • 2019   Probability and Statistics

  • 2019   Experiments of Computer Science and

  • 2019   Special Lectures in Computer Science and

  • 2019   Graduation Study

  • 2019   Seminar in Intelligent Information

  • 2018   Seminar on Fundamentals of Computer

  • 2018   Computer Architecture I

  • 2018   Computer Science

  • 2018   Project-Based Training I

  • 2018   Probability and Statistics

  • 2018   Project-Based Training II

  • 2018   English Presentation

  • 2018   Experiments of Computer Science and

  • 2018   Graduation Study

  • 2018   Seminar in Intelligent Information

  • 2017   Computer Science

  • 2017   Seminar on Fundamentals of Computer

  • 2017   Computer Architecture I

  • 2017   Project-Based Training I

  • 2017   Probability and Statistics

  • 2017   Experiments of Computer Science and

  • 2017   Special Lectures in Computer Science and

  • 2017   Graduation Study

  • 2016   Computer Architecture I

  • 2016   Seminar on Fundamentals of Computer

  • 2016   Automata and Formal Languages

  • 2016   Probability and Statistics

  • 2016   Experiments of Computer Science and

  • 2016   Special Lectures in Computer Science and

  • 2016   English Presentation

  • 2016   Graduation Study
