Optimal stochastic gradient descent algorithm for filtering

buir.contributor.author: Turalı, Mehmet Yiğit
buir.contributor.author: Koç, Ali Taha
buir.contributor.author: Kozat, Süleyman Serdar
buir.contributor.orcid: Turalı, Mehmet Yiğit|0000-0002-6147-1741
dc.citation.epage: 104731-6
dc.citation.spage: 104731-1
dc.citation.volumeNumber: 155
dc.contributor.author: Turalı, Mehmet Yiğit
dc.contributor.author: Koç, Ali Taha
dc.contributor.author: Kozat, Süleyman Serdar
dc.date.accessioned: 2025-02-17T13:12:24Z
dc.date.available: 2025-02-17T13:12:24Z
dc.date.issued: 2024-12
dc.department: Department of Electrical and Electronics Engineering
dc.description.abstract: Stochastic Gradient Descent (SGD) is a fundamental optimization technique in machine learning due to its efficiency in handling large-scale data. Unlike typical SGD applications, which rely on stochastic approximations, this work explores the convergence properties of SGD from a deterministic perspective. We address the crucial aspect of learning rate settings, a common obstacle in optimizing SGD performance, particularly in complex environments. In contrast to traditional methods, which often provide convergence results based on statistical expectations that are usually not justified, our approach introduces universally applicable learning rates. These rates guarantee that a model trained with SGD asymptotically matches the performance of the best linear filter, irrespective of the data sequence length and without statistical assumptions about the data. By establishing learning rates that scale as μ = O(1/t), we offer a solution that sidesteps the need for prior knowledge of the data, a prevalent limitation in real-world applications. In this way, we provide a robust framework for applying SGD across varied settings, with convergence guarantees that hold in both deterministic and stochastic scenarios without any underlying assumptions.
dc.embargo.release: 2026-12
dc.identifier.doi: 10.1016/j.dsp.2024.104731
dc.identifier.eissn: 1095-4333
dc.identifier.issn: 1051-2004
dc.identifier.uri: https://hdl.handle.net/11693/116330
dc.language.iso: English
dc.publisher: Elsevier
dc.relation.isversionof: https://dx.doi.org/10.1016/j.dsp.2024.104731
dc.rights: CC BY 4.0 (Attribution 4.0 International Deed)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source.title: Digital Signal Processing
dc.subject: Learning rate
dc.subject: Linear filtering
dc.subject: Optimization
dc.subject: Stochastic gradient descent
dc.title: Optimal stochastic gradient descent algorithm for filtering
dc.type: Article
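
As a rough illustration of the scheme described in the abstract, the following minimal Python sketch runs SGD on a linear filter with a learning rate decaying as μ_t = μ₀/t. This is not code from the paper: the squared-error loss, the zero initialization, the constant μ₀, and the synthetic data are assumptions made purely for the example.

import numpy as np

def sgd_linear_filter(x, d, mu0=0.5):
    # x: (T, m) array of regressor vectors, d: (T,) desired outputs.
    # Returns the learned linear filter weights.
    T, m = x.shape
    w = np.zeros(m)                  # filter weights w_t (assumed zero init)
    for t in range(T):
        e = d[t] - x[t] @ w          # instantaneous error e_t = d_t - x_t^T w_t
        mu = mu0 / (t + 1)           # learning rate decaying as O(1/t)
        w = w + mu * e * x[t]        # gradient step on the loss (1/2) e_t^2
    return w

# Hypothetical demo: data generated by a fixed linear relation plus noise.
rng = np.random.default_rng(0)
T, m = 5000, 4
w_star = rng.standard_normal(m)
x = rng.standard_normal((T, m))
d = x @ w_star + 0.1 * rng.standard_normal(T)
print(np.linalg.norm(sgd_linear_filter(x, d) - w_star))  # gap shrinks as T grows

With this O(1/t) schedule the learned weights approach those of the best linear filter for the observed sequence, consistent with the asymptotic claim in the abstract, without requiring any statistical knowledge of the data in advance.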

Files

Original bundle

Name: Optimal_stochastic_gradient_descent_algorithm_for_filtering.pdf
Size: 555.39 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission