We have an exciting project in AI, and we are looking for input from the development community on some technical questions.
If you are a developer with experience in AI or machine learning, we would appreciate your thoughts on the queries below.
The questions are:
1. Developing algorithms that speed up data processing over complex network architectures to train a machine learning model.
a) What is the algorithm's computational complexity with respect to time, and to the number of nodes and branches of the network?
b) How much of a speed improvement has the algorithm shown?
c) Has the algorithm's performance been benchmarked, and if so, against which baselines?
d) Are the algorithms designed from scratch, or are they based on off-the-shelf solutions?
e) What metrics are you using to determine the effectiveness of training the machine learning model?
f) Which machine learning models, exactly, are you experimenting with?
g) Have you compared the training time of your machine learning model with that of any similar model? If so, have you conducted a test of significance on the observed training times?
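To clarify what we mean by a test of significance in 1(g): given repeated training runs of two models, a two-sample test can tell you whether the observed difference in mean training time is likely to be real rather than run-to-run noise. A minimal sketch using Welch's t-test from SciPy (the timing numbers below are hypothetical stand-ins, not measurements):

```python
import statistics
from scipy import stats

# Hypothetical wall-clock training times (seconds) over five runs each.
ours = [412.3, 405.1, 418.7, 409.9, 414.2]
baseline = [431.5, 428.0, 440.2, 433.8, 436.1]

# Welch's t-test does not assume the two samples share a variance,
# which is the safer default for timing data.
t_stat, p_value = stats.ttest_ind(ours, baseline, equal_var=False)

print(f"mean ours={statistics.mean(ours):.1f}s, "
      f"baseline={statistics.mean(baseline):.1f}s")
print(f"t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
```

If the run count is small or the timing distribution is skewed, a non-parametric alternative such as the Mann-Whitney U test may be more appropriate.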
2. Optimization of database query structures.
a) What is the average query execution speed for a complete fetch?
b) What is the computational complexity of the database query with respect to time and search depth?
c) How does that computational complexity vary with the model's training time?
d) Is there an optimal point with respect to supporting queries across multiple databases, and the depth to which a search is conducted to retrieve the relevant data fields?
e) How do the query executions perform with respect to dynamic memory consumption? Are you doing anything to optimize memory use?
f) How are you processing data through the model, given that data-set partitioning would be carried out at the operating-system level and the partitions then re-assembled?
g) Are you using any specific caching technique to speed up query execution? If so, which one?
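To illustrate the kind of answer we are after in 2(g), here is a minimal sketch of query-result caching using Python's `functools.lru_cache`. The `run_query` function and the SQL string are hypothetical stand-ins for a real database round trip:

```python
import functools
import time

def run_query(sql):
    """Toy stand-in for a database fetch; the latency is simulated."""
    time.sleep(0.01)  # pretend this is network + execution time
    return f"rows for: {sql}"

@functools.lru_cache(maxsize=256)
def cached_query(sql):
    # Results for identical query strings are served from memory.
    # A real system would also need cache invalidation on writes.
    return run_query(sql)

start = time.perf_counter()
cached_query("SELECT * FROM users WHERE id = 1")  # cold: hits the database
cold = time.perf_counter() - start

start = time.perf_counter()
cached_query("SELECT * FROM users WHERE id = 1")  # warm: served from cache
warm = time.perf_counter() - start

print(f"cold={cold * 1000:.1f} ms, warm={warm * 1000:.3f} ms")
```

We would be interested to hear whether you cache at the statement level like this, at the row or page level, or in an external layer such as Redis or memcached.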